Proofing Kern On results - How do you review thousands of pairs?

I decided to try Kern On with my latest typeface. After deleting all the small pairs (values less than 5) and cleaning up some garbage (pairs for CALT glyphs), I still have over 7,500 pairs in each master. And the kern values are terrible for many pairs. Going through them one by one in Glyphs would probably take longer than just kerning the typeface in MetricsMachine. How are people dealing with this? Are people just crossing their fingers and hoping that Kern On did a good job?
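
In case it helps anyone else, that cleanup step can be scripted. Below is a minimal Macro panel sketch for Glyphs 3; it assumes the keys in font.kerning are glyph IDs for single glyphs and "@MMK_…" strings for kerning groups, and that removeKerningForPair() takes glyph or group names, so verify that on your own setup before running it on a real file.

    # Minimal sketch: delete kerning pairs whose absolute value is below a
    # threshold, in every master. The key handling (glyph IDs vs. "@MMK_…"
    # group names) is an assumption to verify against your Glyphs version.
    font = Glyphs.font
    threshold = 5

    id_to_name = {g.id: g.name for g in font.glyphs}

    def key_to_name(key):
        # group keys ("@MMK_L_…" / "@MMK_R_…") are passed through unchanged
        return key if key.startswith("@") else id_to_name.get(key, key)

    for master in font.masters:
        if master.id not in font.kerning:
            continue
        small = []
        for left, rights in font.kerning[master.id].items():
            for right, value in rights.items():
                if abs(float(value)) < threshold:
                    small.append((key_to_name(left), key_to_name(right)))
        for left, right in small:
            font.removeKerningForPair(master.id, left, right)
        print(master.name, "removed", len(small), "pairs below", threshold)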

Comments

  • Igor Petrovic
    Igor Petrovic Posts: 300
    edited June 2022
    In addition to your questions, does Kern On require predefined kerning classes?
  • John Hudson
    John Hudson Posts: 3,210
    In addition to your questions, does Kern On require predefined kerning classes?
    No. I run it without kerning classes, and then do class compression after the fact. (A toy sketch of what I mean by class compression is at the end of this comment.)

    And the kern values are terrible for many pairs. 

    Define models for some of those pairs to adjust the output?
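
    Class compression here just means merging glyphs whose flat kerning behaves identically into shared classes. A plain-Python toy sketch over a {(left, right): value} dict, purely to illustrate the idea (real tools also compress the right side and handle exceptions):

        from collections import defaultdict

        # toy flat kerning: A and Aacute behave identically on the left side
        flat = {
            ("A", "V"): -80, ("A", "W"): -60,
            ("Aacute", "V"): -80, ("Aacute", "W"): -60,
            ("T", "o"): -70,
        }

        rows = defaultdict(dict)              # left glyph -> its kerning row
        for (left, right), value in flat.items():
            rows[left][right] = value

        classes = defaultdict(list)           # identical row -> member glyphs
        for left, row in rows.items():
            classes[frozenset(row.items())].append(left)

        for members in classes.values():
            if len(members) > 1:
                print("left class:", sorted(members))   # ['A', 'Aacute']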
  • John Hudson
    John Hudson Posts: 3,210
    Another way to use Kern On is to do manual kerning for the core of your design, and then use that kerning as a model to extend the kerning into the edge cases.
  • John Hudson
    John Hudson Posts: 3,210
    [To date, I have only used Kern On for syllabic scripts where the number of potential pairs is more than can reasonably be kerned manually.]
  • Thomas Phinney
    Thomas Phinney Posts: 2,892
    I kind of figure that one might gain that trust over time? If the algorithm is good enough.
  • Aren't you always manually checking whether your kerning is good, whether that kerning was automated or not? I would assume it's the stage before that, the bulk of the kerning work, that is sped up by a tool like Kern On. Furthermore, I have the feeling that a few silly mistakes tend to slip through the net when I kern manually that might never have occurred if the kerning were automated. Just to defend a plugin I have never used ;)
  • Ray Larabie
    Ray Larabie Posts: 1,432
    Are there any reliable autokerning systems that don't require Apple hardware?
  • Jens Kutilek
    Jens Kutilek Posts: 364
    I can’t comment on its reliability, but DTL KernMaster is available for Windows.
  • Andrea T.
    Andrea T. Posts: 40
    edited July 2022
    I started using KO recently, and it works fine for me. I usually start with the interactive KO engine, then I prepare some tabs with the most common kerning pairs in the Latin script (and Cyrillic when present) and correct and refine the kerning directly.

    Usually I end up with at least 100 models and some exceptions. After that I test the kerning on some sheets I made in InDesign and in a web browser, refining each time, going back and forth between the master file and the test specimen. I make sure to exclude a lot of glyphs from the kerning engine, and I tend to keep the kern table as small as possible (15 kb to 25 kb; a quick way to check this on an export is sketched at the end of this comment). I also run a lot of scripts that help me check the result (KernCrasher, overkerned pairs, large pairs, all of them from the mekkablue library).

    It is a long process and full of small mistakes, but at the end I have a more than acceptable and fairly extensive kern, saving a lot of time and my mental health.

    In the end I accept some imperfections in my kerning (at least in the less important pairs); it is the nature of automatically generated stuff. I always think that my eyes are much more reliable than a computer, but I'm not sure anymore whether that is true or just my bias (I always thought computers were just fast and stupid).
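
    To check the size goal mentioned above, a quick fontTools look at an exported binary works. Note that GPOS also contains any other positioning features, so this is only an upper bound on the kerning itself, and the file name is just a placeholder:

        from fontTools.ttLib import TTFont

        font = TTFont("MyTypeface-Regular.otf")   # placeholder path
        for tag in ("GPOS", "kern"):              # OpenType GPOS vs. legacy kern table
            if tag in font.reader.tables:
                print(tag, font.reader.tables[tag].length, "bytes")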
  • FettleFoundry
    FettleFoundry Posts: 10
    edited November 27
    I’ve been experimenting with Kern On for a recent typeface and found that although I can iterate for common pairs and get the results looking quite good, I still end up with issues with edge cases that I need to find a good process for handling.

    For example, I end up with kerning pairs for things like ‘yhook-twosuperior’ and ‘ellipsis-hyphen’, which feel superfluous and are often significantly over-kerned. Setting models for these helps with the results, but ends up being a very time-consuming task.

    Kern-A-Lytics has really helped with identifying weird edge cases, as well as other inconsistencies across masters (a rough sketch of that kind of cross-master check is at the end of this comment).

    I’m thinking that I could set some problematic glyphs to ‘no kerning’ within KO, but in many cases I’d like them to be kerned against other characters that are relevant. Some of the ‘special spacing’ kerning groups don’t seem to produce the results I would have expected in terms of which glyphs they end up kerned against.

    I think it could be useful to my KO workflow to have a list of ‘problematic pairs’ to help with setting models, but I could probably spend that time better manually kerning the entire typeface the way I want it.

    I’ve really enjoyed using it, and I can see it becoming part of my process once I’ve had the time to refine how I use it, but right now I’m just not sure how it speeds things up for me.
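
    For the cross-master part specifically, a rough Macro panel sketch like the one below (assuming the usual Glyphs 3 structure of font.kerning as master ID -> left key -> right key -> value) can flag pairs that exist in one master but are missing from another:

        font = Glyphs.font
        pairs_per_master = {}
        for master in font.masters:
            pairs = set()
            if master.id in font.kerning:
                for left, rights in font.kerning[master.id].items():
                    for right in rights:
                        pairs.add((left, right))
            pairs_per_master[master.name] = pairs

        # any pair missing from a master but present elsewhere is worth a look
        all_pairs = set().union(*pairs_per_master.values())
        for name, pairs in pairs_per_master.items():
            missing = all_pairs - pairs
            print(name, "is missing", len(missing), "pairs that other masters have")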