I've been hacking on a new approach to letterfitting
and thought perhaps some of you might be interested (seeing as Simon got some good discussion out of his Atokern thread). It's inspired by a deep learning concept (attention masking) but doesn't actually use a neural net, because I don't think neural nets are the right solution – we need an understandable model with fully adjustable parameters, not an inscrutable black box.
I have some preliminary results, and what's needed now is a solid tool to evaluate how different implementations of the various parts of the model affect the quality of the output for different font types (e.g. display-size hairline italic sans vs. caption-size blackletter). Since I don't think I'll find enough time to finish this by myself in the near future, and I didn't want it to languish on my hard drive forever, I've written up what I have so far as a blog post. I'm happy to answer any questions here if my explanations are unclear.
Perhaps one of you is intrigued enough to help me tinker with this as a team?