UC-only font with double unicodes

Dear TypeDrawers,
I’m about to produce an uppercase-only font with at least two Unicodes per letter: one for the letter itself and one for its lowercase counterpart (e.g. A: U+0041 & U+0061). My test results have all been positive.
Are there any known major issues?



  • We've used this fallback mechanism numerous times, and have never received complaints from our customers.
  • Why not just copy all of your uppercase glyphs into the lowercase slots? It is very quick.
  • Because it is harder to maintain and takes up more space.
  • That's how I produced my Ambicase fonts, and likewise haven't seen any problems. There is a potential issue (raised in a now-lost Typophile thread) about the ambiguity throwing a wrench in the computer's efforts to reconstruct an underlying text stream when processing PDFs in a certain way, but I decided that for a display font it was rather unlikely to come up and/or matter.
  • James Todd Posts: 201
    I just modify my composite/accent script to fill in the lc for me. I rerun the script every time I change a base glyph so it doesn’t change anything in my workflow.
  • I would happily assign double Unicodes for all-caps fonts, but those red markings in FontLab make me nervous. Wouldn't it be great if font editors could be set to an all-caps mode? The lowercase glyphs would be hidden from view. Upon exporting you'd select double Unicode or components/kerning classes. About a third of my typefaces are all caps, so I'd use it a lot. Currently, I do what James does.

    With multiple masters, once the first master, classes, and OpenType programming are worked out, I select every lowercase glyph using a select-by-color script, then delete them. When all the masters are done, I use a script to regenerate the lowercase glyphs. Maybe what I really need is an automatic double-encoding script that I can run at the end.
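
[Editor's note: the "automatic double-encoding script" wished for above could be sketched in plain Python. The dict-based cmap model and the one-letter glyph names here are illustrative assumptions, not any particular editor's API.]

```python
import string

def double_encode(glyph_names):
    """Return a codepoint -> glyph-name map that points both the
    uppercase and lowercase codepoints of each A-Z glyph at the
    single uppercase glyph."""
    cmap = {}
    for name in glyph_names:
        if len(name) == 1 and name in string.ascii_uppercase:
            cmap[ord(name)] = name          # e.g. U+0041 -> "A"
            cmap[ord(name.lower())] = name  # e.g. U+0061 -> "A"
    return cmap

mapping = double_encode(["A", "B", "period"])
# mapping[0x41] and mapping[0x61] both point at glyph "A";
# "period" is left for the normal single-mapping pass elsewhere
```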

  • James Puckett Posts: 1,363
    I just put components in all of the lowercase slots and Glyphs keeps everything in sync for me.
  • John Hudson Posts: 1,021
    I would happily assign double Unicodes for all-caps fonts, but those red markings in FontLab make me nervous.
    So create a .nam mapping file for all-caps fonts. The red markings indicate that the name-to-Unicode mapping of the currently active mapping file doesn't match what you have. If you switch to a mapping in which upper- and lowercase characters are double-mapped to single glyphs, the red markings will disappear. The .nam files are stored in the user/FontLab/Shared folder, and the text format is easy to figure out: it's just a list of Unicode values and glyph names with a simple header.
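
[Editor's note: the value/name lines John describes could be generated like this. Only the body lines are shown; the simple header FontLab expects is omitted here rather than guessed at.]

```python
import string

def all_caps_nam_lines():
    """Build .nam-style value/name lines that double-map the upper-
    and lowercase codepoints to the same uppercase glyph name."""
    lines = []
    for ch in string.ascii_uppercase:
        lines.append(f"0x{ord(ch):04X} {ch}")          # 0x0041 A
        lines.append(f"0x{ord(ch.lower()):04X} {ch}")  # 0x0061 A
    return lines

nam = all_caps_nam_lines()
# nam[0] == "0x0041 A", nam[1] == "0x0061 A"
```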
  • I just put components in all of the lowercase slots and Glyphs keeps everything in sync for me.
    Including kerning?
  • @Paul van der Laan Yes, if you give them the same kerning classes.
  • Kent Lew Posts: 560
    For those doing their production with RoboFont, be aware that you can’t generate a font with double-encoding directly via RF’s native generation.

    I’m pretty sure this has to do with how RF calls ufo2fdk to compile.

  • Kent Lew Posts: 560
    Addendum: I believe there is also an issue with makeotf (which may be why RF doesn’t bother generating a base font with multiple mapping).

    The GlyphOrderAndAliasDB does not support multiple encodings. So you need to find a way around that, as well, for OTFs.
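
[Editor's note: for reference, a GOADB entry looks roughly like the following — each glyph gets one line with at most one uniXXXX override, which is why double encoding can't be expressed in it. The exact fields are from memory and should be checked against the makeotf documentation.]

```
# finalName    sourceName    Unicode override
A              A             uni0041
# no second line can map uni0061 to the same glyph
```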
  • I think the same may be true for Glyphs, for the same reason.
  • John HudsonJohn Hudson Posts: 1,021
    So because Adobe need one-to-one glyph-to-Unicode mappings for Acrobat text reconstruction they made their font compiler enforce this, contrary to the cmap table specification?

    Yet more happy that I don't rely on any FDK-based workflow.
  • ufo2fdk seems to handle UFOs with multiple Unicode values per glyph, and it generates the cmap table on its own, so it should work unless makeotf is overwriting the cmap table.
  • Kent Lew Posts: 560
    The base font that ufo2fdk compiles to hand off to makeotf does respect multiple encoding in the UFO. But, for whatever reason, the same base font from RoboFont calling ufo2fdk does not (which is inspectable if you Save FDK Parts Next To UFO in preferences). I suspect that Frederik may have subclassed the OutlineOTFCompiler and changed the makeUnicodeToGlyphMapping routine.

    Further, by default makeotf will use the GlyphOrderAndAliasDB to set the Unicodes in the OTF it generates, and the GOADB format does not permit multiple encodings. So, yes, it is overwriting the cmap table.

    I think this Adobe “enforcement” is a side-effect of the GOADB approach, which has other benefits, and the default behaviors.

    I believe it may be possible to work around this limitation if you address makeotf directly from the command line and add appropriate flags. (I haven’t experimented yet to test bypassing the GOADB and find the right combination of flags.)

    Another option would be to post-process the OTF with TTX and re-overwrite the cmap table.
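
[Editor's note: the TTX post-processing idea above can be sketched with nothing but the standard library, operating on the XML that ttx dumps. This is a minimal sketch for one subtable; a real cmap has several subtables that would all need the same treatment, and the `<map code="0x41" name="A"/>` element shape is assumed from ttx's dump format.]

```python
import xml.etree.ElementTree as ET

def double_encode_ttx_cmap(xml_text):
    """Given the XML of one ttx-dumped cmap subtable, add a lowercase
    <map> entry for every uppercase A-Z entry."""
    table = ET.fromstring(xml_text)
    # snapshot the existing entries before appending new ones
    for m in list(table.iter("map")):
        code = int(m.get("code"), 16)
        if ord("A") <= code <= ord("Z"):
            lc = ET.SubElement(table, "map")
            lc.set("code", "0x%x" % (code + 0x20))  # A -> a offset
            lc.set("name", m.get("name"))
    return ET.tostring(table, encoding="unicode")

src = '<cmap_format_4><map code="0x41" name="A"/></cmap_format_4>'
out = double_encode_ttx_cmap(src)
# out now also contains a <map> entry for code 0x61 pointing at "A"
```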

  • Or, if you are starting with UFO sources, use ufo2ft or fontmake, which do not use Adobe’s FDK.