I trained a neural network to kern a font (mostly)


Comments

  • Well, leaving aside the legal implications for the moment (because let's face it lots of us could have opinions but none of us really knows...) I think I'm basically at the stage where it's time to ask the practical questions: What sort of interface would be best for type designers to have to this?

    Right now it ingests a font and outputs kerning values for most common glyphs. If you had a magic autokerner built into your font editor, what should it look like, what options should it have, and what should it do? For example, I've found having a scaling slider useful, to give you a bit of manual control. Should it override or (optionally) keep existing kerns? Should it process all possible glyph pairs or do you want to define ranges to kern, or...?
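To make the scaling-slider idea concrete, here is a minimal sketch of what it could do under the hood. The function and dictionary shape are invented for illustration; they are not the actual tool's API.

```python
def scale_kerns(predicted_kerns, scale=1.0):
    """Apply a global scaling slider to autokerner output.

    predicted_kerns: dict mapping (left, right) glyph-name pairs to kern units.
    scale: 1.0 leaves values untouched; values below 1.0 pull every kern
    toward zero (less aggressive), values above 1.0 exaggerate them.
    """
    return {pair: round(value * scale) for pair, value in predicted_kerns.items()}

kerns = {("A", "V"): -80, ("T", "o"): -60, ("L", "T"): -40}
print(scale_kerns(kerns, scale=0.5))  # halve the effect of every kern
```

A slider in the editor would just map its position to `scale` and re-apply, so the designer keeps one coarse knob over the network's output without editing pairs individually.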
  • John Hudson Posts: 1,440
    Is it possible to make the tool iterative, and to take into account explicit feedback from the user? That is, could I run the tool, and then tweak some number of the kern pairs, and then run the tool again and have it take those tweaks into account as control values? That's what I've always wanted from an auto-kerner.
  • Dave Crossland Posts: 988
    edited January 1
    Whatever advanced options there are, there should be a fully automated "I do not care, just give me what you can" mode. 

    Can it do spacing plus kerning, or only kerning?
  • Simon Cozens Posts: 297
    Just kerning. We have autospacers already. (See various comments above.) But since there’s clear demand I will give it a go when I come back from holiday. Autospacing will need regression, which is less reliable than classification. It would be interesting to try.

    Currently there are basically no knobs to twiddle. You feed it a list of glyphs and it kerns the pairs. Presumably when I integrate it into an editor, we can have options like “kern selected pairs keeping/overriding existing kerning” and “kern whole font keeping/overriding existing kerning”. I don’t yet know what to do about group kerning. It sort of can identify its own groups, but that’s not broken out into a feature yet.

    I would really love to do what John suggests and have a “learn from my examples,” a kind of supervised learning thing. But I am not sure how to implement that. It certainly wouldn’t be possible in the existing framework; the network has “learnt” how to kern about a million pairs, so showing it another five or ten would not noticeably change the way it operates. I don’t know how to separate out the “idea” of fitting shapes together with the execution of particular styles of kerning. So currently you just get what you’re given.
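The "keeping/overriding existing kerning" option mentioned above could be a small merge step on top of whatever the network predicts. This is a hypothetical sketch, not part of any released tool; `merge_kerns` and the dictionary shapes are made up.

```python
def merge_kerns(existing, predicted, override=False):
    """Combine predicted kerns with a font's existing kerning.

    existing, predicted: dicts mapping (left, right) glyph pairs to kern units.
    override=False keeps any pair the designer has already kerned by hand;
    override=True lets the predicted value win for every pair.
    """
    merged = dict(existing)
    for pair, value in predicted.items():
        if override or pair not in existing:
            merged[pair] = value
    return merged
```

Restricting `predicted` to the selected pairs before merging would give the "kern selected pairs" variant of the same option.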
  • Kent Lew Posts: 782
    I would really love to do what John suggests and have a “learn from my examples,” a kind of supervised learning thing. But I am not sure how to implement that.
    You could provide the tool as a blank slate and allow a user such as John to train it on his own past examples. Then provide a mechanism for the user to specify/tweak a handful of fundamental pairs, and have the tool implement others based on the new values plus the learning from past examples.

    Just like teaching an intern. ;-)

  • For me two settings would be interesting.

    1) Interpolate between existing kerning and script kerning. So you can start with no kerning, or use any rough or refined, or previously created kerning, and interpolate between that and what the script gives you in any one run.

    2) Configurable range between tight and loose kerning, so you can bias the script to produce values that fit the style and overall metrics of your font.


    Maybe as an easier first approach the script could only calculate kerning for selected glyphs, or selected pairs, and give it a low threshold. The group interface from Bubble kern comes to mind.
    The other feature, detecting kern groups, could be separate at first; maybe even just output a list of suggested groups. This is something the designer should define, imo, but the script could assist. Further down the road, the script could read in a font’s kerning groups to make better guesses at groups, and deduce suggested values with greater certainty.
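The two settings proposed above could be combined in one post-processing pass: a linear interpolation between the existing kerning and the script's output, plus a fixed tight/loose bias. The function below is an illustrative sketch, not the actual tool.

```python
def blend_kerns(existing, predicted, t=0.5, bias=0):
    """Setting 1: interpolate between existing and predicted kerning.
    Setting 2: shift everything tighter or looser by a fixed bias.

    t=0 keeps the existing values, t=1 takes the script's values;
    bias<0 tightens every pair, bias>0 loosens every pair.
    Pairs missing from either dict are treated as unkerned (0).
    """
    pairs = set(existing) | set(predicted)
    out = {}
    for pair in pairs:
        a = existing.get(pair, 0)
        b = predicted.get(pair, 0)
        out[pair] = round((1 - t) * a + t * b) + bias
    return out
```

Starting from an unkerned font, `t=1` simply takes the script's values, so the same knob covers both the "no kerning yet" and the "refine my rough kerning" workflows.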
  • Simon Cozens Posts: 297
    Kent Lew said:

    You could provide the tool as a blank slate and allow a user such as John to train it on his own past examples.
    Well, this is what we have already. The code in the repository provides a complete blank slate — it doesn't know anything at all. Right now, I have an instance running that has been training for just over a week on a bunch of fonts, looking at just under 200 million kern pairs. It's doing OK on the whole, but sometimes misses really obvious opportunities. It will probably take another week or so to get really good at kerning.

    The point is, you can't just show it a few examples and go from tabula rasa to reliable results in a user-friendly amount of time. You have to bake a lot of knowledge into the model in advance, and that means that this...
    Then provide a mechanism for the user to specify/tweak a handful of fundamental pairs, and have the tool implement others based on the new values plus the learning from past examples. 

    Just like teaching an intern. ;-)
    becomes really tricky. Because by this stage, the intern really has a pretty fixed idea of how it wants to kern and won't listen to your suggestions.

    I am sure there is some clever way to combine pre-existing models with new parameters, but I basically don't know enough about machine learning to implement whatever it is. It would not be a mainstream neural network, but some pretty advanced technique. Because I don't know how to do that, I would prefer to make "you get what you're given" as good as possible. I don't mind the output of this thing being good enough and free but fairly inflexible, particularly since what you want may be possible with Black[Kerner] from Foundry Black once it becomes available: https://black-foundry.com/blog/blackspacer-blackkerner/
  • One way to ‘adjust’ the kerning values could be to ‘prepare’ the input by increasing/decreasing the spacing, and then subtracting/adding that value from/to the resulting kern pairs.
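Under a very simplified model of what a kerner does, the trick above works out like this. Both `fake_autokern` and its 50-unit target gap are invented for illustration; a real network obviously does something far richer than closing every pair to a fixed distance.

```python
def fake_autokern(gaps):
    """Stand-in for the real network: kern each pair so that its current
    visual gap lands on an arbitrary 50-unit target."""
    return {pair: 50 - gap for pair, gap in gaps.items()}

def kern_with_offset(gaps, delta):
    """Pad each pair's spacing by `delta` units before kerning, then apply
    the resulting kerns unchanged to the original font: every pair ends up
    `delta` units tighter (delta > 0) or looser (delta < 0) than the
    kerner's baseline fit."""
    padded = {pair: gap + delta for pair, gap in gaps.items()}
    return fake_autokern(padded)

gaps = {("A", "V"): 120, ("T", "o"): 90}
baseline = fake_autokern(gaps)          # the kerner's preferred fit
tighter = kern_with_offset(gaps, 20)    # every kern 20 units more negative
```

Note that adding the padding back onto each kern value would exactly reproduce the baseline fit, so the usable knob comes from how much of the spacing change you choose to compensate.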