Metric and em size

In .otf, the em size is usually 1000.
However, in some fonts (for example EB Garamond) the effective size of the glyphs exceeds that measure: the font adopts a standard partition of 800 units for the ascender and 200 for the descender, but the actual descenders of p and q, for example, reach further down, so much so that there is an undershoot zone around [-291, -281].
Is it a technique that can cause drawbacks?
Thanks

Comments

  • John Hudson Posts: 2,955
    The em is the body height of the type that is scaled to the text size in applications. So, for example, if type is set to 10pt in an application, it is the em height of the font that equals 10 points.

    The actual size of the glyph outlines in the font can have any relationship to the em, but it is typically either close to the em height (ascender + descender in a typical Latin font), or slightly less than the em height (very notably smaller in some cases, e.g. Eric Gill’s Perpetua type). The situation you describe, in which the outlines are notably large compared to the em height, is unusual, and yes, it could have drawbacks. Since it is the em that is scaled to the text size in applications, if parts of the glyphs significantly exceed the em height, then a text setting with tight leading (interline spacing) will appear tighter than in other, more typical fonts.
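
    To see the arithmetic, here is a minimal Python sketch. The 800 / -200 / -291 values simply mirror the EB Garamond figures from the question, and the 10 pt size is an arbitrary illustration, not anything taken from a real font.

    ```python
    # Minimal sketch: the em, not the outline size, is what maps to the text size.

    UPM = 1000          # units per em
    POINT_SIZE = 10.0   # text size chosen in the application

    scale = POINT_SIZE / UPM   # points per font unit

    # A font whose ascender + descender fill the em exactly:
    typical_extent = (800 - (-200)) * scale    # 10.00 pt

    # A font whose real descenders overshoot the em (about -291 instead of -200):
    oversized_extent = (800 - (-291)) * scale  # 10.91 pt

    print(f"typical outline extent:   {typical_extent:.2f} pt")
    print(f"oversized outline extent: {oversized_extent:.2f} pt")
    # Both fonts are "10 pt" because only the em is scaled, but the second one's
    # glyphs take up about 9% more vertical space, so tight leading collides sooner.
    ```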
  • Thanks for the answer, technically tough as always.
    Does this mean, however, that modest deviations, for example of about fifteen points, are permissible, or is it always (much) better to stay within 1000?
  • John Hudson Posts: 2,955
    edited May 2022
    A modest overshoot of the em is usually okay, and typically at least some parts of a glyph set of a digital font—accent marks on caps or above ascenders, usually—will exceed the em height.

    What may have happened in the EB Garamond example you cite is that the design originally had shorter descenders, but these were later lengthened. Instead of rescaling the whole design, the new descenders were allowed to overshoot the bottom of the em. That’s just a guess, though.
  • Is it a technique that can cause drawbacks?

    In such cases, if the Ascender/Descender values in the hhea table and the TypoAscender/TypoDescender values in the OS/2 table are safely increased (so that they cover the outlines), no clipping will occur. In your example, if the Descender is still at -200, clipping may occur at a 1 em line height (see the fontTools sketch at the end of this comment).

    About overshoots, I have found that around 4 to 10 units (at 1000 UPM) and 15 to 20 units (at 2000/2048 UPM) look okay. Beyond that they become noticeable. Though it will depend upon the design and its curves. For rounded terminals in an uppercase 'A', around 4 units of undershoot looks fine, whereas in the case of an 'o', it may go up to 10 units (for 1000 UPM).
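
    For anyone who wants to check this on an actual font, here is a rough sketch using fontTools; "MyFont.otf" is a placeholder path, and the checks only compare the font bounding box against the declared metrics, nothing more.

    ```python
    # Rough check: compare the font-wide outline extremes (head.yMin/yMax)
    # against the hhea and OS/2 vertical metrics discussed above.
    from fontTools.ttLib import TTFont

    font = TTFont("MyFont.otf")
    head, hhea, os2 = font["head"], font["hhea"], font["OS/2"]

    print("unitsPerEm:          ", head.unitsPerEm)
    print("outline yMax / yMin: ", head.yMax, "/", head.yMin)
    print("hhea ascent/descent: ", hhea.ascent, "/", hhea.descent)
    print("OS/2 typo asc/desc:  ", os2.sTypoAscender, "/", os2.sTypoDescender)
    print("OS/2 win asc/desc:   ", os2.usWinAscent, "/", -os2.usWinDescent)

    # e.g. if yMin is -291 but the declared descents stop at -200, some
    # renderers will clip the bottoms of p, q, g at a 1 em line height.
    if (head.yMin < hhea.descent or head.yMin < os2.sTypoDescender
            or head.yMin < -os2.usWinDescent):
        print("Outlines extend below at least one declared descender; clipping is possible.")
    if (head.yMax > hhea.ascent or head.yMax > os2.sTypoAscender
            or head.yMax > os2.usWinAscent):
        print("Outlines extend above at least one declared ascender; clipping is possible.")
    ```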
  • Dave Crossland Posts: 1,389
    Since a line spacing of 1.2em is typical, not exceeding the UPM size by more than 20% is good.

    Vertical metrics also affect how text engines handle the situation of ink outside the UPM.

  • Historically, font size was equal to the body size, which was also the line advance (baseline to baseline).

    They had no point measures, but used fractions of an inch, which may have differed between countries or regions, but not by much.

    In historical prints before the 18th century we see overshoots (or undercuts), i.e. ascenders or descenders "hanging" over the edge of the body. They can come into conflict if a descender of the previous line meets an ascender of the next line. Paper was expensive.

    They also had leading, which they called "grobe" in German: something like 10 pt on an 11 pt body.

    Back to metrics. For empirical statistics of fonts, and for calculating back from scans of historical prints, I normalise the EM-size to 100. That is easier to read, as 1 unit is 1% of the EM-size.

    In Latin fonts the baseline is easy to detect. Next, the x-height in Romans is typically the top of \x (or of one of u, v, w, x, y). I call the capitals line the H-line, because \M can overshoot. The overall size is sometimes called the hp-size, but in reality g, j and y have longer descenders in most fonts. Capitals with accents are the highest.

    If we take the top and bottom of each glyph relative to the baseline (= 0), we can calculate vertical proportions.

    Thus we get e.g. for BigCaslonM in 100 UPM (Units per EM):

    H-line: 72
    ascender line (h): 72
    descender line (p): -22
    hp-size: 94
    x-line: 47
    max top (Ö, Ü): 87
    min bottom (9): -26

    Vertical proportions:

    accent zone: 15 (87 - 72)
    ascender zone: 25 (72 - 47)
    minuscule zone: 47
    descender zone: 22

    Thus maximum height is 87 + 26 = 113 (13% more than EM-size), but fits into a default line advance of 1.2 x EM.
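
    If it helps, here is a rough fontTools sketch of that normalisation: it measures a few reference glyphs, rescales them to a 100-unit em and prints the vertical proportions. "MyFont.otf" is a placeholder, and the glyph choices (H, h, x, p) follow the post above and are assumed to exist and be non-empty in the font.

    ```python
    from fontTools.ttLib import TTFont
    from fontTools.pens.boundsPen import BoundsPen

    font = TTFont("MyFont.otf")
    upm = font["head"].unitsPerEm
    glyph_set = font.getGlyphSet()

    def top_bottom(name):
        """Return (bottom, top) of a glyph in font units, relative to the baseline."""
        pen = BoundsPen(glyph_set)
        glyph_set[name].draw(pen)
        x_min, y_min, x_max, y_max = pen.bounds
        return y_min, y_max

    def to_100(value):
        """Normalise a font-unit value to a 100-unit em."""
        return round(value * 100 / upm, 1)

    h_line    = to_100(top_bottom("H")[1])   # capital height ("H-line")
    asc_line  = to_100(top_bottom("h")[1])   # ascender line
    x_line    = to_100(top_bottom("x")[1])   # x-line
    desc_line = to_100(top_bottom("p")[0])   # descender line

    print("H-line:        ", h_line)
    print("ascender line: ", asc_line)
    print("x-line:        ", x_line)
    print("descender line:", desc_line)
    print("hp-size:       ", asc_line - desc_line)
    print("ascender zone: ", asc_line - x_line)
    print("minuscule zone:", x_line)
    print("descender zone:", -desc_line)
    ```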
  • Is there any hard and fast rule or formula for setting the em in digital type design, apart from 1000, 2048 or 4096? Why are these numbers not rounded, like 2000 instead of 2048 or 4100 instead of 4096? Can I likewise set an arbitrary number for the em, like 1250 or 1500 instead of 1000?
    Thank you!
  • jeremy tribby Posts: 212
    edited May 2023
    you can choose whatever you want as long as it doesn't exceed 16,384, as far as I am aware. some obscure environments will prefer 2048. there is some prior discussion on upper limits here
    I have used 2000 when I need higher fidelity than 1000, just because I can easily convert between 2000 and 1000 (what I normally use) in my head
  • Ruixi Zhang Posts: 29
    Why these numbers are not rounded like 2000 for 2048 or 4100 for 4096?

    2048 and 4096 may not feel rounded simply because you’re more familiar with base-10 counting; they are perfectly “rounded” in binary (base-2), as 2048=2¹¹ and 4096=2¹². Incidentally, 16,384=2¹⁴ is also a perfectly rounded binary integer. METAFONT internally works with a UPM of 1,048,576=2²⁰, and our base-10 brains prefer to quote this number as one million.
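
    In case it helps to see that "binary roundness" spelled out, a tiny Python check:

    ```python
    # 2048, 4096, 16384 (and METAFONT's 1,048,576) are all exact powers of two:
    for n in (2048, 4096, 16384, 1048576):
        print(f"{n} = 2**{n.bit_length() - 1} = {bin(n)}")
    # 2048 = 2**11, 4096 = 2**12, 16384 = 2**14, 1048576 = 2**20
    ```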

  • Thomas Phinney Posts: 2,732
    edited May 2023
    Some folks at Microsoft believe that, for TrueType outlines at least, having the em square be a power of 2 (e.g. 1024, 2048, etc.) results in slightly better performance in scaling operations.

    Some folks at Adobe believe this was true for Motorola 68K CPUs but not for anything since.
  • Mark Simonson Posts: 1,652
    I believe that's true, too. When Apple introduced TrueType, it had to be able to run acceptably on 8 MHz 68000 processors, such as those in the Macintosh Plus and SE.
  • John Hudson Posts: 2,955
    There have been issues in some third-party PDF renderers that assume CFF-flavour OT fonts have a UPM of 1000 (the old PS Type 1 standard), which results in scaling errors. It has been some years since we encountered one of these.
  • Ruixi Zhang Posts: 29

    Going all the way down to machine-level binary arithmetic, a UPM that is a power of 2 indeed makes the scaling operation faster.

    Say you have a glyph that is 967=(1111000111)₂ units wide, and a UPM of 1024=(10000000000)₂ units. Then “computing” the fraction 967⁄1024 involves simply shifting the binary point; that is, 967⁄1024=(0.1111000111)₂=0.944… No actual “division”, which could be costly back then, is required. Now imagine applying this simple operation to all the coordinates within each glyph, then to all the glyphs within each line, then to all the lines within each page…, ergo the better performance.
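
    A toy illustration of that point in Python (not how any particular rasterizer is implemented): with a power-of-two UPM the scaling is a shift of the binary point, which integer hardware can do with a bit shift, whereas an arbitrary UPM needs a genuine division.

    ```python
    coord = 967          # a coordinate in font units
    upm_pow2 = 1024      # 2**10

    shift = upm_pow2.bit_length() - 1      # 10
    fixed_16_16 = (coord << 16) >> shift   # coord / 1024 in 16.16 fixed point
    print(fixed_16_16 / 65536)             # 0.9443359375 == 967 / 1024

    # With a UPM of 1000 the same scaling needs an actual division:
    print(coord / 1000)                    # 0.967
    ```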

  • edited May 2023
    > a UPM that is a power of 2 indeed makes the scaling operation faster.
    So a 1024 UPM would be better than 1000? Just wondering.
  • So a 1024 UPM would be better than 1000? Just wondering.

    Not anymore (I’d hope!), unless you’re designing for computers in the 1980s, or if you’re constrained by backward-compatibility issues or edge cases (such as the third-party PDF rendering mentioned by John). Computers today can perform divisions in a fraction of a second just fine.
