From your experience, does the choice of "units per em" (e.g., 1000, 1024, 2048) make a difference in font functionality? Any reason you choose one size over another? I have generally used 1000, but I've always wondered about other scales.
Given that 18 units per em was good enough for Monotype, on the one hand no doubt 1000 units per em is enough, but on the other hand, a number of units that is a multiple of 18 would provide compatibility.
Thanks, John! That's very interesting. You say "was good enough". When was that? Is it that way now? I've never heard of scaling by a multiple of 18.
The 18 unit em John mentions is from the metal type days, and it was strictly to do with the widths of the characters. The shapes of the characters were not constrained by any sort of unit grid the way they are in digital fonts.
At 2048 funits/em very small circles will no longer be circular, so increase the value to 4096 or 8192 funits/em.
Scripts with fine lines like Gabriola use 4096 funits/em.
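As a rough illustration of why small circles suffer on a coarse grid, here is a quick sketch (plain Python, with made-up radii) that measures how far points on a circle drift from true roundness once snapped to integer funits; quadrupling the available units shrinks the relative error by roughly the same factor.

```python
import math

def max_relative_error(radius_funits):
    """Worst-case deviation from a true circle, relative to the radius,
    after snapping sampled circle points to the integer funit grid."""
    worst = 0.0
    for deg in range(360):
        a = math.radians(deg)
        x = round(radius_funits * math.cos(a))
        y = round(radius_funits * math.sin(a))
        worst = max(worst, abs(math.hypot(x, y) - radius_funits))
    return worst / radius_funits

# A dot 4 units across at 2048 funits/em would be 16 units across at 8192:
print(max_relative_error(2))   # coarse grid: visibly non-circular
print(max_relative_error(8))   # finer grid: much smaller relative error
```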
Now this is getting interesting. Thank you for this. I presumed that the main reason for increasing the funits/em would be to support delicate features, but I never would have imagined going to 4096 or 8192 funits/em. Are there many fonts that do this? Is the only reason _not_ to that it will increase file size because you're using larger integers? Does that have that much of an impact? Thanks for feeding my curiosity.
For MM families is it better to use these higher numbers so interpolated instances are more precise and less affected by integer round-off error? Or does it matter? I guess it probably depends on the complexity of the design. On the other hand, perhaps it's better that the font editor not round off any transformation at all, except at the final TTF/OTF generation stage?
Unless you have a good reason to not use 1000, use 1000, since some software still assumes it.
An Em that's a power of two (typically 2048) is reputed to result in faster rasterization, but it's probably rarely relevant.
That's kind of what I've heard, but without specifics, which is partially why I asked in the first place. Do you know of specific (esp. common) software that still assumes it? Have there been any studies that validate the claim of faster rasterization when using an Em that's a power of two?
The problem you can run into when drawing is that the vector angles are also constrained by the unit density. This can cause undue distortion of details in the drawing. It may not be a problem for the average font, but when you approach extremes of weight and detail, things can get ugly. Choose the UPM based on the targeted function of the face.
That's precisely why I wondered if a higher density is more advantageous, particularly in the case of MM interpolation because the distortion (at least when the resultant points are rounded to the nearest integer location) would be lessened.
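A minimal sketch of that round-off question, with hypothetical master coordinates: interpolating between two masters and then snapping the result to the unit grid leaves a quantisation error of up to half a unit, so a grid four times as fine can only shrink that bound.

```python
def interp_rounded(p0, p1, t, grid_scale=1):
    """Interpolate, then round to a grid with grid_scale units per
    original unit (grid_scale=4 mimics a 4x larger UPM)."""
    return round((p0 + (p1 - p0) * t) * grid_scale) / grid_scale

# Hypothetical point positions in two masters drawn at 1000 UPM
p0, p1, t = 104, 331, 0.54
exact = p0 + (p1 - p0) * t                            # 226.58
err_coarse = abs(interp_rounded(p0, p1, t) - exact)   # 1000-unit grid
err_fine = abs(interp_rounded(p0, p1, t, 4) - exact)  # 4000-unit grid
print(err_coarse, err_fine)
```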
Unless you have a good reason to not use 1000, use 1000, since some software still assumes it.
The assumption is that fonts with PostScript outlines (Type 1 or OpenType CFF/.otf) will have a 1000-unit em, and fonts with TrueType outlines, a 2048-unit em. What is considered “standard” is dependent on the outline format.
The most recent issue I hit with a CFF OT font with a non-1000 UPM was six or seven years ago: non-Adobe PDF creation tools messing up spacing when embedding fonts. I've not seen any similar issues more recently, but I wouldn't be surprised if there is still some software out there making assumptions about CFF = 1000, despite the OT spec clearly stating that the UPM value can be any whole number from 16 to 16384.
With regard to TTF, 2048 was a common convention, and never a standard in the same way that 1000 was for PS Type 1 fonts. What remains recommended, at least by Microsoft, is that TTF UPM is best set to a power of 2. This is because of optimisations in Microsoft's rasterisers that improve performance speeds in rendering if this is the case. Other companies have concluded that computing power is now generally so fast that these optimisations don't matter any more, but Microsoft stands by the recommendation.
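The optimisation being referred to is, at heart, that dividing by a power of two can be done with a bit shift. A toy sketch of the idea (the 26.6 fixed-point device coordinates are typical of TrueType rasterisers; the numbers themselves are made up):

```python
def funits_to_26dot6(coord, ppem, upm):
    """Scale a funit coordinate to 26.6 fixed-point device units."""
    return (coord * ppem * 64) // upm

def funits_to_26dot6_pow2(coord, ppem, upm_log2):
    # With a power-of-two em, the division becomes a cheap right shift
    return (coord * ppem * 64) >> upm_log2

# 1434 funits at 12 ppem in a 2048-unit em (2048 == 1 << 11)
print(funits_to_26dot6(1434, 12, 2048))      # -> 537
print(funits_to_26dot6_pow2(1434, 12, 11))   # -> 537
```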
When you use a typeface that is 2048 UPM, InDesign will round kerning and perhaps spacing units to 1000. Granted, this is a small amount and will rarely matter to most people. I surmise that keeping UPM to even thousands [1000, 2000, 4000] would keep clear of this problem. I am assuming that Microsoft is accommodating legacy systems in sticking with their power of 2 recommendation?
Not just legacy systems. The optimisations in the rasterisers are current for DirectWrite as well as the older GDI environments. Basically, Microsoft's view is that since the optimisations improve things, even if the effect of the improvements is smaller and smaller, why would you ditch the recommendation?
That said, their position has shifted. When we started making fonts for Microsoft in the late 1990s, a power of 2 UPM was a procurement requirement. Now it is still a recommendation, but no longer a requirement, and I have occasionally shipped them a font that, for one reason or another, had a non power of 2 UPM.
There was a problem in Adobe apps (maybe CS6) with UPM values higher than about 4500: when you converted text to outlines, all nodes with coordinates above that were clamped down. Printing and PDF export were fine. I haven't tested this recently.
Abraham Lee said: I presumed that the main reason for increasing the funits/em would be to support delicate features, but I never would have imagined going to 4096 or 8192 funits/em. Are there many fonts that do this? Is the only reason _not_ to that it will increase file size because you're using larger integers? Does that have that much of an impact? Thanks for feeding my curiosity.
I have only come across a few that used 8192, and as I said Gabriola uses 4096. One foundry that deserves to remain anonymous (but not for that reason) uses 4096 funits/em.
File size increase for my Guru Italic font with 2,893 glyphs (and 1,265 composites) is as follows:
Hinted
2048 funits/em = 916 KB (938,232 bytes)
4096 funits/em = 940 KB (962,928 bytes)
8192 funits/em = 959 KB (982,232 bytes)
Unhinted
2048 funits/em = 719 KB (736,496 bytes)
4096 funits/em = 737 KB (755,016 bytes)
8192 funits/em = 761 KB (779,384 bytes)
I think there is still a good reason for sticking to powers of 2, however fast modern computer chips may be, and in spite of Microsoft's recommendation dating from 2002. Even microseconds add up when many operations are performed while rendering text to the screen, zooming in, or reflowing text.
For Mac OS X, 1,000 funits/em is recommended. Apparently, there's a bug that affects PDF output for fonts with other funits/em values.
Are these different fonts using the same number of points? I know, for example, that FontLab Studio 5 adds more off-curve points in a TrueType conversion the higher the UPM is.
For Mac OS X, 1,000 funits/em is recommended. Apparently, there's a bug that affects PDF output for fonts with other funits/em values.
That was in the first version of Mac OS X 10.10, and I believe it has been fixed in 10.10.1 or 10.10.2. Oh, I see I was mentioned in that forum entry.
They were all exported from the same font project after changing the funits/em so the number of points are identical. This is what I do to change funits/em.
- Select all composites
- Invert selection
- Copy
- Change funits/em
- Paste
This keeps the glyphs the same size. Hinting is reapplied on export. Metrics would need to be recalculated and composites would need to be recomposed, if you wanted to do this for real.
I would think that Asian fonts, like Chinese, would be much easier to deal with at a very high UPM. There are an awful lot more strokes than in Latin.
Actually, CJK fonts often use lower ems (down to 256) to limit file size, because the characters are so numerous. What mitigates this is that each character can use almost the full em, since there are no extenders.
Regarding file size, I suppose that once you've crossed the line where most points are more than 255 units apart, it doesn't really matter whether you use a UPM of 2048, 4096, 8192, etc. The savings come from being able to use a single byte to store a point's coordinate, as opposed to a signed short, which is two bytes. A font with a 1024 UPM might have 50% of its coordinates stored as shorts (say 25,000 points, which equals 50,000 bytes), but when "scaled" to 256 UPM they might all fit in a byte. That saves you 25 KB.
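To make the byte-versus-short point concrete, here is a simplified sketch of TrueType-style coordinate storage (real `glyf` flags also let zero deltas and repeats cost nothing, which this ignores); the coordinate values are invented:

```python
def packed_size(coords):
    """Bytes needed for a coordinate list stored TrueType-style:
    deltas of magnitude up to 255 fit in one byte (a flag carries
    the sign); anything larger needs a two-byte signed short."""
    total, prev = 0, 0
    for c in coords:
        delta = c - prev
        total += 1 if -255 <= delta <= 255 else 2
        prev = c
    return total

# Hypothetical outline x-coordinates at 1024 UPM, then scaled to 256 UPM
xs_1024 = [0, 512, 900, 300, 40]
xs_256 = [x // 4 for x in xs_1024]
print(packed_size(xs_1024))  # some deltas exceed 255 -> shorts needed
print(packed_size(xs_256))   # every delta now fits in a byte
```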
There's a silly experiment I did with a famous icon font that reduces file size in exactly that manner.
Oh, and get your rotten tomatoes ready: I think most webfonts would work perfectly well for screen with a low UPM like 256. The least that could be done is scaling the UPM as far down as possible by using the greatest common divisor for all coordinates.
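That greatest-common-divisor idea can be sketched in a few lines (the coordinate list is hypothetical):

```python
from functools import reduce
from math import gcd

def reducible_factor(coords, upm):
    """Largest factor by which the UPM and every coordinate can be
    divided with no loss of precision at all."""
    return reduce(gcd, coords, upm)

# Hypothetical outline coordinates in a 1024-unit em
coords = [0, 512, 768, 1024, 640]
factor = reducible_factor(coords, 1024)
print(factor, 1024 // factor)  # everything can be scaled down by this factor
```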
I would think that Asian fonts, like Chinese, would be much easier to deal with at a very high UPM. There are an awful lot more strokes than in Latin.
All the type foundries I know use a 1000×1000 square when drawing kanji. When building CJK typefaces for MS, they sent me a lot of JSON files where the coordinates are floating-point numbers! The target UPM is 2048, though. Most of the bytes in the file are instructions.
I think most webfonts would work perfectly well for screen with a low UPM like 256.
I think most (if not all) of Font Bureau’s RE fonts are on 512 upem, using similar reasoning. I believe that this approach grew out of David Berlow’s experience creating the Prelude fonts for Palm. He figured in that case that even a single glyph enlarged to the full size of the device would hardly reach a pixel resolution that benefited from more unit resolution (or file size).
In the case of the Slabo fonts, I tailored the UPM to each target px size, so that I would have a number that was evenly divisible by the px number to give me a grid that fit to the UPM.
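That arithmetic can be sketched directly (the 13 px target and candidate UPM values here are just for illustration, not Slabo's actual numbers):

```python
def units_per_pixel(upm, target_px):
    """Funits per pixel when the em maps exactly onto target_px pixels;
    only meaningful when the division is exact."""
    if upm % target_px != 0:
        raise ValueError("grid will not align with the pixel raster")
    return upm // target_px

# 2048 % 13 != 0, so a 13 px target gets a fractional grid cell,
# whereas 2080 (= 13 * 160) gives an exact 160-unit cell per pixel:
print(units_per_pixel(2080, 13))  # -> 160
```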
Zapfino originally had a larger UPM, I think 1000, and it set much smaller, in OS X 10.0.
It had been sized relative to the caps, including the swashy ones, and thus had a tiny x-height compared to other fonts at a given point size. I guess this confused some people, or at least seemed counter-intuitive.
So Apple decided to resize the font (by making it 2.5x as large); the change was in OS X 10.2, IIRC. The easiest way to do this was to leave everything else the same, not touching anything else in the font or a single glyph, but simply changing the UPM value. A very simple adjustment from a technical standpoint, even an elegant way to make the change.
Yes, that's right. The first version of Zapfino for Apple was very small on the body, in order to accommodate the very tall ascenders of the taller style variants. This was done in consultation with Apple, and they shipped the font, but then they got a lot of complaints from people saying that the font was too small relative to other fonts. So Apple decided to change the UPM so that everything would scale larger.
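The trick works because rendered size is coordinate / UPM × point size, so shrinking the UPM while leaving every coordinate untouched scales everything up proportionally. A sketch with a made-up cap height:

```python
def rendered_height(coord_units, upm, point_size):
    """Height on the page of a feature drawn coord_units tall."""
    return coord_units / upm * point_size

# Hypothetical cap height of 500 units; glyphs untouched, only UPM changed
before = rendered_height(500, 1000, 12)
after = rendered_height(500, 400, 12)   # 1000 / 2.5 = 400
print(after / before)  # -> 2.5
```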
The 18 unit em John mentions is from the metal type days, and it was strictly to do with the widths of the characters. The shapes of the characters were not constrained by any sort of unit grid the way they are in digital fonts.
Oops! Yes, I should have remembered that the 1000 or 2048 units into which the em is divided for a digital typeface are used for describing the shape of the character, not just the width of its bounding box, which is why a fine grid is needed to avoid it becoming an obtrusive constraint.
If one wants to reproduce the widths of a classic typeface precisely, it might make sense to choose a grid that is a multiple of the unit system used for the original face. Even foundry type appears to have been designed to a unit of 1/4 point by some foundries; which makes sense, as without some commonality, how could lines be justified exactly?
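A small sketch of the multiple-of-18 reasoning (the 1008-unit em and the width values are hypothetical): an em that is a multiple of 18 converts Monotype's 18-unit set widths to whole funits with no rounding at all.

```python
upm = 1008                             # 18 * 56, a hypothetical choice
funits_per_unit = upm // 18            # each historical width unit = 56 funits
widths_in_18ths = [5, 9, 10, 13, 18]   # hypothetical set widths in 18ths of an em
print([w * funits_per_unit for w in widths_in_18ths])  # exact integer funit widths
```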
An Em that's a power of two (typically 2048) is reputed to result in faster rasterization, but it's probably rarely relevant.
When Microsoft [co-]created TrueType, they conceived of and confirmed the speed advantage.