What were the first OpenType font releases? And when?


Comments

  • Adam Twardoch Posts: 507
    edited September 2015
    David, 

    OK, maybe I should have phrased it differently. Microsoft Windows NT 3.1 (1993) and Apple QuickDraw GX (1995) used Unicode.

    And the Apple engineers did design SFNT around 16-bit Unicode, which was great. But at Apple, those designs were effectively not put into real, significant use until almost a decade later. 

    Microsoft and Apple, and also IBM, did research and develop their first internal implementations in the very early 1990s. 

    But it took Microsoft only a few more years after NT 3.1 to bring Unicode to mainstream adoption: Office 97 (1996) was a hugely popular product that enabled truly multilingual Unicode text authoring, even when running on non-Unicode Windows 95 — with correctly rendered text, hyphenation, spellchecking and localized user interfaces. 

    It was followed by Windows NT 4 (1996 — Unicode at the OS level), Windows 2000 (2000) and finally, reaching the widest audience, Windows XP (2001). 

    Apple introduced GX to MacOS 7.x in 1995 but then removed portions of it in MacOS 8, struggled with tons of bugs, made it optional, never released any significant products that would use it, and generally buried it. It was cool pioneer work that you did with them around GX, but it remained “only” pioneer work for a long time. It was only in 2001–02, with Mac OS X, that true Unicode adoption came to the Apple platform. 

    But Unicode was only part of the multilingual puzzle — the other was smart font technology.

    Again, it was Apple who designed and implemented a good character-glyph abstraction level in GX in 1993, and a theoretically working system, but it was Microsoft who actually commissioned fonts for a huge number of the world’s writing systems, did comprehensive localization and internationalization of virtually all of their apps, actively pushed Unicode and eventually established it. 

    Microsoft shipped their first OS based on Unicode in 1993, Apple did so in 2001. 

    Apple “sketched” global text processing and smart fonts for non-Western typography, but Microsoft brought it to the world much earlier, made it global and actually made it work. 

  • Adam Twardoch Posts: 507
    edited September 2015
    Apple did well from the very beginning with graphics. They supported Béziers and PostScript, and in OS X, PDF, transparency and video. But what Microsoft did really well early on, far better than Apple, was language and text, on a global scale. (Apple did better later on with their “pixel spread” fuzzy boldish glyph rendering, wisely sparing their users the “subpixel disco ball”.)

    And may I say, I think Microsoft made far better use of Matthew Carter’s talent (Verdana, Georgia, Tahoma) than Apple did (Skia). The world could survive without Skia, but is a much better place thanks to Verdana and Georgia. 

    BTW, I did not pay for this opinion, except with sleepless nights, but I won’t ask for a refund. :) 

    (But I’ll add that I am biased. When it comes to type, I’m far more a “text person” than a “graphics person”. I put function over form, and reading over looking, every single time.) 
  • Deleted Account Posts: 739
    edited September 2015
    Oh, you want a rephrase. Huh ;). I'll rephrase too, then.  

    I will start with Mr. Stephen Coles' question. I believe that Mr. Coles, perhaps you Adam, and certainly others, believe that Microsoft's original release of OpenType was an operating-system technology other app developers could use - this is false. Only MS apps could use OT. Having a defined standard and allowing others to use your implementation are two entirely different things. 

    You know that, I'm sure, but that OS thing and the definition of "product" build ambiguity into the original post. Adobe, obviously, does not and never will want OS-level OT, any more than they wanted OS-level AAT. Apple never wanted anything but OS-level support for advanced typography. I don't care that much for one platform or another, but Apple's philosophy was correct. 

    What I care about most, for the sake of users, and the actual type developers, is that the above game is over, with the old app-by-app OT philosophy partially eclipsed by international needs and the web. 

    D. Lemon: are app developers, like the ones that Yves stood up and railed against at ATypI over UI, the ones you want to give the choice of how to give Latin users access to registered OT features?

  • Only MS apps could use OT.
    Not sure how to parse that statement. MS rolled out OT support via two OS-level services: Uniscribe and OTLS, with associated APIs. Any application running on Windows had access to those, and in most cases the new text-processing APIs were adopted pretty quickly, since they provided automatic handling of complex scripts, bidi layout, etc. to any Windows app. Companies like Adobe (and later Quark) opted not to use the OS-level services for OpenType Layout, mostly because they needed cross-platform-compatible results and so had to 'roll their own'.
  • David is dead on with the issues with GX.

    Of course, Apple decoupled the font features from line layout with AAT, but it seems like it was too late to get widespread adoption. Even though there are plenty of apps that do AAT now, there are few non-Apple fonts... and the “plenty of apps” does not include the biggest publishing and office apps.
  • David Lemon Posts: 6
    edited September 2015
    ...
    D. Lemon: are app developers, like the ones that Yves stood up and railed against at ATypI over UI, the ones you want to give the choice of how to give Latin users access to registered OT features?

    Sorry if I wasn't clear before. I'm writing about what happened & why, not making an argument about what should've happened. Both Adobe & Quark wanted nothing to do with a layout model that didn't leave their apps in control.

    I think the language- and feature-support situation would be much better today if things had gone differently back then. But that's just a hypothetical; what was most critical was buy-in from the apps that publishers & graphic designers relied on. Sometimes major compromises (like PostScript using a graphics model that reinforced the aging American point system) are necessary for a new technology to reach critical mass.
  • Adam Twardoch Posts: 507
    edited September 2015
    @David Lemon: Thank you, David. Similarly, my point in writing what I wrote was not to “assign blame” to someone. It’s obvious to me that in retrospect, we can all see pros and cons of various decisions that some of us made in the past, but I think it’d be unfair to say that *back then* some of us “should have made a different choice”. Back then, we simply didn’t have the information we have now, and yes, there are many speculative “what if” scenarios. The question is what we do now with the experience we have gathered over the years. 

    In retrospect, my take is: 

    1. Apple made some very good design choices when they were designing the SFNT container. The 16-bit indexing is a bit of a problem now, but hell, they did it when Unicode was still thought of as being 16-bit. TrueType was released by Apple in 1991, and Unicode did not come up with UTF-16 (the ugly hack with the surrogate pairs to address codepoints beyond the BMP) until 1996. Basically, the origins of Unicode are riddled with compromises that were developed once people realized that 64K is not enough. The whole BMP/SMP stuff, surrogate pairs etc. — it’s still a big nuisance today. But given what people like Sampo thought and knew back in 1991, I think they could have done much worse than what they did with SFNT. 

    2. Similarly, Adobe did some great things with PostScript and later PDF, some very ambitious things, and some things that are still annoying us years later. But it’s obvious that they didn’t do it on purpose just to annoy us. :) I mean, when PDF came out, it was thought of as some low-end thing to compete with the likes of Novell Envoy, and AFAIR, was RGB-only at first (maybe not, not sure anymore). The 20 or so different font subformats that PDF supports today were most certainly not created for fun — and they actually complicate the lives of people *at Adobe* today much more than anyone else’s. 

    3. The folks at Apple have done one more super-ambitious thing, and that was QuickDraw GX. TrueType GX was a very good design at the time, and it actually suffered from the fact that it was “embedded” into QuickDraw GX. In a sense, QuickDraw GX failed (for various reasons), and it pulled TrueType GX with it. The complicated “political” situation at Apple caused the technology to go dormant for a relatively long time, and it was salvaged out of the GX remnants and put into OS X as AAT much later. 

    4. Microsoft implemented Uniscribe and OTLS in Windows. Perhaps David B is right that initially, the Uniscribe API was kept private by Microsoft. But I think John may be more right because his seminal article “Windows Glyph Processing” ( http://www.microsoft.com/typography/developers/opentype/ ) is dated November 2000, which is just a few months after the release of Windows 2000 which first included Uniscribe. Microsoft did license OTLS for free to interested developers (I remember that I got a copy back then). 

    5. In retrospect, I think the choice to implement “everything” in fonts and have a “stupid” layout engine similar to GX/AAT would have been better. Maybe less “stupid” but more “one way of doing things”. Remember, this is hypothetical — but if Adobe had implemented GX/AAT as their layout system underpinnings, using their own code that would implement only the actual text layout portions of GX (or even initially just a subset thereof), rather than relying on Apple OS services, we might have been in a “cleaner” position today. With OpenType, we’re in the situation where every implementation of OpenType Layout is different, and effectively only implements a subset of the spec. Developing more complex OpenType fonts is a nightmare (which I’m sure @John Hudson will attest to) because things that work in Uniscribe don’t work in the Adobe composer, and those that work in the Adobe composer don’t work in CoreText, and those that work in CoreText don’t... etc. This is because OpenType offloads a lot of responsibility for the shaping to the layout library, and each layout library is different. With something like AAT and its simple state table model, it’s almost impossible to get this unwanted variety in behavior. 

    6. Adobe did truly great work with the release and licensing of the Adobe FDK for OpenType. The FEA syntax designed by Sairus Patel was simple enough for most to follow, and was pleasantly “gradual” — it allowed you to forget about languagesystems and lookups if you didn’t care about them for simple stuff, and then MakeOTF would do some more “magic”, or it allowed you to be more explicit (a small sketch follows below). That, and the fact that FontLab was allowed to bundle the AFDKO library with FontLab 4, certainly contributed to the popularization of OpenType. But in a way, I think that the FEA syntax was a bit independent of OpenType. I can imagine that the simplified and somewhat abstracted approach of the FEA syntax could have been used to compile not just OTL but AAT tables as well, with a gradual path towards the more explicit. But, well, it didn’t happen, and what happened, happened. 
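
    To illustrate what I mean by “gradual”, here is a minimal sketch of the same f+i ligature written the lazy way and then the explicit way. The font path, the glyph names and the use of fontTools’ feaLib (rather than MakeOTF) are just assumptions for the example:

    ```python
    # Minimal sketch of the "gradual" FEA syntax: the same f+i ligature written
    # the lazy way and the explicit way, then compiled into GSUB with fontTools'
    # feaLib. "MyFont.ttf" and the glyph names f, i, f_i are hypothetical.
    from fontTools.ttLib import TTFont
    from fontTools.feaLib.builder import addOpenTypeFeaturesFromString

    # Simple form: no languagesystem statements, no named lookups; the compiler
    # registers the feature under the default script and language for you.
    SIMPLE = """
    feature liga {
        sub f i by f_i;
    } liga;
    """

    # Explicit form: the same substitution, with languagesystems and a named
    # lookup spelled out, for when you need full control.
    EXPLICIT = """
    languagesystem DFLT dflt;
    languagesystem latn dflt;

    lookup LIGA_FI {
        sub f i by f_i;
    } LIGA_FI;

    feature liga {
        lookup LIGA_FI;
    } liga;
    """

    font = TTFont("MyFont.ttf")
    addOpenTypeFeaturesFromString(font, EXPLICIT)  # builds GSUB from the FEA source
    font.save("MyFont-liga.ttf")
    ```

    The nice thing is that both forms live in one syntax, so you can start lazy and only get explicit when you actually need to.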

    So now, the question is where should we go next. This is pretty much the question for OpenType 2.0 or 3.0. Many of us have some experience, theoretical and practical, with both the models of AAT and OTL, and the practical implementation. There is less experience with AAT, so fewer caveats or gaps have been discovered, and there is much more experience with OTL. 

    The way I see it, the AAT vs. OTL difference at its core is like the difference between TrueType “instructions” and Type 1/CFF “hints”. The TrueType bytecode was thought to be “display instructions” that dictate exactly to a “stupid” renderer how each glyph should be rasterized, and the PostScript hints were “display hints” that “suggest” to a “smart” renderer which constraints to observe when doing the screen optimization. Similarly, AAT was “layout instructions” which dictate the behavior to a relatively stupid, fast layout engine. OTL was “layout hints” which merely suggest certain layout behavior to a potentially smart layout engine. 

    To me, it is quite consistent that Apple went with “instructions” on both the rendering level (TT) and the layout level (AAT), while Adobe went with “hints” on both the rendering (PS) and the layout level (OTL). This seems to be the sentiment that @David Lemon alluded to — Adobe preferred to have more control in both rendering and layout than the Apple solutions would have given them. 

    20 years later, we’re at the point where, I think, the general consensus is that we don’t strive for cross-platform pixel-perfect consistency when it comes to glyph rendering. So for rendering, the “hints” model has won. The obsession with controlling each pixel via the TT instructions has proved short-sighted once we’ve said goodbye to the monochrome rendering. Antialiasing has by necessity abandoned strict consistency, and since ClearType, the “instructions” have become “hints”. With high-DPI displays, the necessity for hinting/instructing became even less of a concern. 

    But what about line layout? Well, my impression is that here, the trend is just the opposite: not just type designers, but also users and possibly app implementers would be happier with more consistency when it comes to line layout. It seems that the “layout hints” model introduced with OpenType has helped the industry initially, but with the increasing complexity, it actually has created more and more problems on the way. 

    Why? Well, because it turns out that the question of how an outline glyph is pushed into pixels is a matter of “taste” and aesthetics, and people as a whole take variation for granted. People always knew the “same” letterforms printed via letterpress on thick paper would yield a different image than the “same” letterforms printed via offset on glossy paper, and yet a different one from those displayed on an iPhone Retina display. 

    But with line layout, the variation in results is much more likely to cross the line between “correct” and “incorrect”. It’s not a matter of taste anymore, not aesthetics, but of right-or-wrong, of orthography or orthotypography. It’s fine if the slash is a bit more or less blurry, but it’s not fine if, instead of the slash, you get a completely different glyph, or if some glyphs clash or are moved to a different place. 

    Differences in glyph rendering are like playing the same notes on different instruments. Differences in line layout (especially on the level of “word layout”) are like playing the wrong notes or being out of tune. :) 

    So when we move towards a replacement of the current OpenType model, I think we should look at more “layout instructions” rather than “layout hints”, and at more, rather than less, consistency at the word layout or even line layout level. Which means that we should look more at technologies such as AAT or DecoType ACE, where the fonts dictate more and the engine is more stupid.

    Or we could look at actual executable code (e.g. JavaScript) embedded within “font-like objects” which drives the layout, i.e. “paints” an allocated piece of canvas with vector or bitmap shapes, given a text input and some CSS-like styling declarations. Though I think the latter can only be considered within a much more controlled environment like HTML. 
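
    Just to make that idea a bit more concrete, here is a rough, entirely hypothetical sketch (in Python rather than JavaScript, with made-up names, widths and rules, not any real API) of a “font-like object” whose own code does the layout and hands the host a list of placed shapes to paint:

    ```python
    # Entirely hypothetical sketch of a "font as program": the font object itself
    # does shaping and positioning, and hands the host a list of placed shapes.
    # Nothing here is a real API; glyph names, widths and rules are made up.
    from dataclasses import dataclass

    @dataclass
    class PlacedShape:
        name: str   # which outline (or bitmap) the host should paint
        x: float    # pen position in font units
        y: float

    class ProgramFont:
        """A font-like object whose layout() method is the layout engine."""

        def __init__(self):
            self.advances = {"f": 520, "i": 300, "f_i": 760, " ": 250}
            self.ligatures = {("f", "i"): "f_i"}

        def layout(self, text, style):
            """Turn characters plus CSS-like styling into positioned shapes."""
            chars = list(text)
            tracking = style.get("letter-spacing", 0)
            shapes, x, i = [], 0.0, 0
            while i < len(chars):
                pair = tuple(chars[i:i + 2])
                if tracking == 0 and pair in self.ligatures:  # the font decides
                    name, i = self.ligatures[pair], i + 2
                else:
                    name, i = chars[i], i + 1
                shapes.append(PlacedShape(name, x, 0.0))
                x += self.advances.get(name, 0) + tracking
            return shapes

    # The host application just paints whatever the font hands back.
    for shape in ProgramFont().layout("fi fi", {"letter-spacing": 0}):
        print(shape)
    ```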

    In a way, OpenType is like HTML+CSS — it’s very declarative, and in the end you hope for the best and often bang your head against the wall (forehead bumps are a known trademark of web developers, you know). AAT or DecoType ACE are more like “basic” PDF — quite predictable and very WYSIWYG, with the renderer being relatively stupid. And my JavaScript fonts idea is more like PostScript — potentially superbly powerful, but will only work if the recipient, i.e. the engine, is willing to “execute programs” rather than just “parsing data”. 

  • AAT is "more acceptable" to app developers because it doesn't take over all line layout, just glyph-level choices. I am dubious that a GX-like solution will be more acceptable to app developers the second time around. (Although if it was intended to be cross-platform from the beginning, that would help.)
  • Adam, re. your #4:

    The Uniscribe APIs and related RichEdit components for internationalised text layout were described in an MSDN article in November 1998. I also have somewhere a copy of Multilingual Computing magazine from around the same period, whose lead article is devoted to Uniscribe text processing.
  • Adam,
    Which means that we should look more at technologies such as AAT or DecoType ACE, where the fonts dictate more and the engine is more stupid. 
    Not coincidentally, the Universal Shaping Engine layout model heads in this direction: within the USE generic cluster structure, some shaping is driven by new Unicode properties, while much of the rest of the shaping — including some aspects of reordering — are driven by the font lookups.
  • Deleted Account Posts: 739
    edited September 2015
    My point about the differences in OS philosophies between Apple and MS/Adobe is not so much a criticism as a historical observation that cannot be repeated. That is the judgement of history, not mine. The web didn't happen just because people thought it was cool; it was also because it floats above the fray of applications in business that, rightly or wrongly, want to restrict 'standards' to the needs of their apps. 

    I strive to get around that by turning every single little thing, in every single point of every glyph of every font, into an instruction set, so I don't play the OT game anymore until it's time to decide which apps' OT I am to address for my user. Adam, and many others of you, already know this instinctively by now: the smart-vs-dumb relationship between font data and font software is key, is variable, and, if the past is any indication, has a long-term future of "all smart", "all dumb" and everything in between.

    For this to be faced and addressed by our little industry is unlikely, with the standards being dominated by owners of huge, unkempt libraries of fonts. They have zero or less inclination to make a smarter font format than OT 2.0 this decade. Apple is disengaged from the process. You have all heard Adobe's pushback: that OT UI is for the apps. MS publishes fonts that have particular typographic performance in their apps, and Google doesn't even turn 21 until 2018.



  • If I were reinventing OpenType from scratch, I would definitely use the "smart font, dumb engine" approach. Well, not a totally dumb engine, as I would make it richer in some ways, but one that just carries out what the font specifies. There are basically two motivations for this: first, to make layout/shaping faster (still relevant in mobile), and second, to make text rendering predictable and consistent across operating systems and applications. As John said, USE is a step in that direction, and if it had come much earlier, it would have saved lots of pain. I'm thinking in particular of Myanmar script, one of the last holdouts for the full adoption of Unicode in the sense Adam describes.

    In particular, I'd run a really fast finite state machine to convert the input Unicode text into a glyph sequence (similar to AAT and Graphite), then apply a separate pass to solve specific problems, such as stretching the Devanagari "i matra" to the correct width. That last could be done either through parameterization of the font, or by baking a set of variants with different widths into the font, and just picking the closest one. I feel this is a general problem; it's similar to adapting the "fi" ligature to different letter-spacing.
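
    To sketch what I mean (a toy example with invented glyph names, widths and rules, not any real shaping engine): a tiny finite state machine that turns characters into glyphs and forms a ligature purely through its transitions, plus the "pick the closest baked variant" idea for the second pass:

    ```python
    # Toy sketch: a tiny finite state machine maps characters to glyph names and
    # forms the f+i ligature purely through its state transitions, the way AAT or
    # Graphite tables drive shaping. Glyph names, widths and rules are invented.
    START, SAW_F = 0, 1

    def shape(text):
        """One left-to-right pass over the characters, driven by the FSM."""
        state, out = START, []
        for ch in text:
            if state == SAW_F:
                if ch == "i":
                    out[-1] = "f_i"   # replace the pending 'f' with the ligature
                    state = START
                    continue
                state = START
            if ch == "f":
                state = SAW_F
            out.append(ch)            # toy simplification: glyph name == character
        return out

    # Second pass, as in the i-matra example: a few width variants are baked into
    # the font and we simply pick the one closest to the required width.
    MATRA_VARIANTS = {"imatra.w1": 400, "imatra.w2": 550, "imatra.w3": 700}

    def closest_variant(required_width):
        return min(MATRA_VARIANTS, key=lambda g: abs(MATRA_VARIANTS[g] - required_width))

    print(shape("fig"))          # ['f_i', 'g']
    print(closest_variant(600))  # 'imatra.w2'
    ```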

    Another potential upside is that, if carefully designed, you'd be able to do a better job with more calligraphic styles, both Nastaliq and Dear Sarah-like fonts for alphabetic scripts. Finite state machines are a powerful abstraction and can encode pretty rich behaviors.

    The downside is that you need a tool to compile the layout tables from some reasonable declarative representation. Most font designers are not going to want to write out FSMs by hand (I personally enjoy it, but I'm definitely not a typical designer).

    Fun stuff to think about, but I'm not sure we'll see anything like this happen. OpenType works pretty well.
  • What was Microsoft's issue with the way GX handled ellipses? You can tell us now, it was so long ago...