On the Origin of (Latin) Type Species

Refurbished blog on groundbreaking research of Darwinian importance for the understanding of type and typography, and for the parametrization of related design processes.


Comments

  • Chris Lozos
    Thanks, Frank!
  • Nick Shinn
    This reminds me of the Hockney-Falco thesis.
    And Tim’s Vermeer.

    Describing your theory as “… groundbreaking … of Darwinian importance…” strikes me as a little too cocky, I’m afraid—perhaps you could call it the Blokland thesis?
  • Nick, I think Frank was referring to Henk Darwin, who lives down the street from him :)
  • Écoutez bien! Darwinian or not, this is work worth everyone's attention. I don't fully agree with all of Frank's conclusions, at least not yet, but his line of inquiry is a very valuable one, in my opinion. We can make our lives easier and our products better with a knowledge of classical widths and sidebearings, which have been an intrinsic part of our reading culture for over five centuries. If you think we live in some new era in which this knowledge has no value, you don't know much. (How's that for a big claim?)

    I've been doing similar work in early Hebrew types (for biblical Hebrew, with full diacritics). I found that, without a full knowledge of what the old guys did, we are forever consigned to half-baked, overly complex, stop-gap measures that get us only halfway to our goals, at best. The old guys knew what they were doing and why. The solutions they found were amazingly elegant in their approach to both aesthetics and engineering. And their skill was often breathtaking.

    Here's yet another big claim: Robert Granjon was better than you, whoever you are.
  • When I started my PhD research in 2007, even in academic circles some people stated that my hypothesis was untrue. From a scientific point of view this was a somewhat unexpected statement, because usually such matters are not untrue until proven false. One connoisseur even went further by denying the existence of justified matrices for fixed mould-registers, although this practice is literally described by Fournier in his Manuel Typographique(1).

    Of course, the outcome of my research could well be that my hypothesis is wrong. So far my measurements, casting from 16th-century matrices, reproduction of peculiarities in spacing by reproducing the fitting of Renaissance type on very simple unit-arrangement systems, etc., seem to prove that I’m looking in the right direction. And even if I come up with strong evidence, it’s not impossible that my dissertation will be used as the basis for research whose goal is to prove the opposite, i.e., that Renaissance punch cutters did it all by eye, as Stanley Morison wanted us to believe(2). That’s perfectly fine, of course, and I would even like to encourage this.

    From a practical side, I think it’s interesting to see how more insight into the (presumed) standardizations and systematizations helps my students when designing type from scratch. This PDF shows an example from the Expert class Type design of the Plantin Institute of Typography, Antwerp, but I will publish more in due course. At the moment one of the EcTd students is working on a Granjon revival, making use of the original matrices and punches in the collection of the Museum Plantin-Moretus and applying outcomes from my research.

    A tool like Kernagic (pronounced ‘Kemagic’) will also be helpful for designing on an organic grid, i.e., one distilled from the design itself. Although the first edition of Kernagic is still a bit clumsy, I made some spacing tests, like this one (in this case in combination with a hinting test). Further programming will be done by Werner Lemberg (FreeType, ttfautohint, as you all know), with Dave Crossland (in his personal capacity) as the driving force behind the project. We hope to present a new command-line tool at the Libre Graphics Meeting in Leipzig next month.
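To make the idea of ‘an organic grid distilled from the design itself’ concrete, here is a hedged sketch, my own simplification and emphatically not Kernagic’s actual algorithm: take the most common interval between consecutive stems as a rhythm unit, then snap inter-letter gaps to whole multiples of it. All names and numbers are invented for illustration.

```python
# Hedged illustration, not Kernagic's actual algorithm: distill a rhythm
# unit from the design itself (here, the most common stem interval) and
# snap inter-letter gaps to whole multiples of that unit.
from collections import Counter

def distill_unit(stem_positions):
    """Take the most common interval between consecutive stems as the unit."""
    intervals = [b - a for a, b in zip(stem_positions, stem_positions[1:])]
    return Counter(intervals).most_common(1)[0][0]

def snap_gap(gap, unit):
    """Round a measured gap to the nearest whole multiple of the unit."""
    return unit * round(gap / unit)

# Assumed stem x-coordinates (font units) for a run of letters like 'minim'.
stems = [100, 220, 340, 460, 580, 700]
unit = distill_unit(stems)                          # 120
print([snap_gap(g, unit) for g in [233, 355, 127, 481]])  # [240, 360, 120, 480]
```

The point of the sketch is only that the grid comes from inside the design (the stem interval), not from an imposed external measure.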

    1. Harry Carter, Fournier on typefounding (New York, 1973), p. 89
    2. Stanley Morison, Pacioli’s classic Roman alphabet (New York, 1994), p. 77

  • Looking forward to reading more!
  • It is a very interesting discovery that they used a unit system in early type. Superb work. Of course Monotype also had a unit system. Was this also of cosmic significance?
  • Frank, I think there is something seriously wrong with not untrue until proven false as a basis for scientific perspective, and I hypothesize a seventh dimension to back it up.
  • Nick Shinn
    PostScript has its own strictures, most notably the tyranny of alignment zones and stem widths.
  • John Hudson
    What I instinctively like about Frank's research is that it points to an early instance of spacing preceding shape in type design. This interests me because I've been thinking about a learning progression in which spacing would precede shape making, as a way of teaching type design. One of the things I observed in the Crafting Type workshop at Seattle last year is that a lot of the weakness of initial attempts at type design seemed to arise from beginning with making shapes in isolation. There seems no way around this if one begins with making shapes, because it isn't until one has made a few that one can start looking at them together and in various arrangements. And that is typically the point at which students start to think about spacing. But I'm interested in a model in which spacing comes first and provides the initial context for the shape creation.
  • Nick Shinn
    When I was teaching, I started with the sidebearings of I, V and O in Futura.
  • John: ‘[…] a learning progression in which spacing would precede shape making, as a way of teaching type design.’

    I developed the LetterModeller (LeMo) application for this purpose. The structure is related to what I distilled from Renaissance type (both Gothic type and the morphologically related roman type). The idea is that the generic pattern is used as a template, basically not unlike the way in which (according to recent research) the more sophisticated Roman inscriptions were made: first, (geometric) patterns were applied to define the spacing, and these patterns were subsequently followed by calligraphers using a brush, in the way Catich described.

    The patterns of LeMo can be adjusted in several ways and, for instance, directly traced with a broad nib, or letters can be drawn on the fixed structure, i.e., the fitting. Text can be generated instantly. Subsequently the structure of the characters can be adapted to the spacing (as, IMHO, Jenson did, see below), or the spacing to the characters, as we are used to today. This PDF shows an example of this approach.

    [image]

    On Wikipedia one can read that Jenson’s ‘carefully modified’ serifs follow an ‘artful logic of asymmetry’*. The idea that the shapes of serifs are the result of ‘artful logic’ is perhaps obvious if one looks at the letters as separate pictures of things. But if one considers that letters are meant to form words and metrically centres the lower-case n between its side bearings, the weight, i.e., ‘blackness’, at the left side of the letter has to be reduced and that at the right side increased to optically balance the letter in its width. Shortening and lengthening the serifs, respectively, can do this. Increasing the thickness of the right serif will also help.

    *http://en.wikipedia.org/wiki/History_of_Western_typography
  • Chris Lozos
    But the subtle correction of nuances, making things appear correct, is done by the human eye/mind/hand and not with any complex grid. People did stuff that looked right to them, with their intuition. All of the analysis and supposition came years later, from academic minds. Theories of how our ancestors did their work are fine and interesting reading, but how does that really matter today in terms of what we do now? Also, who is to say that what was done hundreds of years ago is any better a process than what we may do now? I doubt that the old original type designers all did things the same way as each other then. I surely doubt that we modern-era typographers do everything the same as our contemporaries. All of this is a very nice academic exercise and worthy of study for its own merit, but to say it is right or wrong has no bearing on what we do now.
  • John Hudson
    Chris, remember that when we're talking about metal type we're talking about material objects and manufacturing processes. There are strong material pressures in favour of applied analysis over intuition. Simply put, it is much more efficient to standardise as many common widths as possible, so that one doesn't need to be constantly adjusting the mould.
    ‘…how does that really matter today in terms of what we do now?’
    Using standardised widths sure as heck made it easier to design my Javanese font and to program its GPOS.
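The economy John describes can be sketched in a few lines; this is a hypothetical illustration of the principle, not his actual workflow, and every name and number below is invented. With a small palette of standardised advance widths, each measured width snaps to its nearest standard, so a mould (or a set of GPOS rules) need only cover a handful of cases.

```python
# Hypothetical sketch: snap measured glyph advance widths to a small palette
# of standardised widths, as a typefounder dedicating moulds to single widths
# (or a font engineer simplifying GPOS logic) might.

def snap_to_standard(width, standard_widths):
    """Return the standard width closest to the measured width."""
    return min(standard_widths, key=lambda s: abs(s - width))

# Assumed example values (1000 font units per em); purely illustrative.
standards = [250, 375, 500, 625, 750, 1000]
measured = {"i": 262, "n": 511, "o": 498, "m": 768, "w": 741}

standardised = {g: snap_to_standard(w, standards) for g, w in measured.items()}
print(standardised)  # {'i': 250, 'n': 500, 'o': 500, 'm': 750, 'w': 750}
```

Six standard widths now cover five glyphs with three distinct values, which is exactly the interchangeability argument made below about early typefounding.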
  • Dave Crossland
    It’s worth noting that Øyvind Kolås is the author of https://github.com/hodefoting/kernagic/ and has his own view of the topic, which is quite different from Frank’s…
  • My experience with the topic itself, the origin of Latin type, is limited, though I consider the thesis to be possible. My own and Frank Blokland’s understanding of how Kernagic’s dynamic snap-gap method does what it does might differ a bit. But we seem to agree that it yields desirable results.

    I strongly support the view that such systematic constraints make consistent design easier. Designs might trend towards following such systems even if the designer didn't use a particular system and just aimed for "visual consistency". In such cases, knowing and understanding the system would accelerate the designer.

    Immediately after Frank, at the Libre Graphics Meeting in Leipzig, I am presenting a font family with a production pipeline that imposes strong systematic constraints, and I am looking forward to comments at LGM on some of the ideas and observations from that project.
  • No cameras? No problem :)
  • Righto, I hope the obelisk is a visual aid too.
  • Of course! The obelisk tip is the reference point, in order to keep the eye in place.
  • Chris Lozos
    John, I concede the value of using a grid, which I might define as part of the design process for a particular typeface, but I cannot see the need for me to use the same grid someone might have used hundreds of years ago for their own set of criteria. I can see the historic and academic value of researching such a thing, but I do not see why I would choose to apply their old solution to my new problem.
    I can certainly understand using a pixel grid at a given text size on screen to design web fonts and perhaps see some crossover with what a Linotype machine may have required years ago but it seems like more work than what I might get out of it for modern usage.
  • Chris: ‘[…] I cannot see the need for me to use the same grid someone might have used hundreds of years ago for their own set of criteria.’

    Although your remark was addressed to John, I feel entitled to comment on it here. I think that if you don’t see any need to apply ‘the same grid’, then you should not do so. My mapping of the Renaissance patterns is meant to provide more insight into the background, i.e., the foundation of our profession, and definitely not to be a restrictive matter. It’s not heretical to deviate from any historic patterns, but if one does so, these patterns can give more insight into the level of deviation. The related software developments serve the same purpose.

    I have been lecturing on type design at the KABK for 27 years now and at the Plantin Society for 19, and I believe that I have never been very dogmatic. But if (former) students follow this topic, they should be the first to comment on this, I reckon.

    Are the Renaissance patterns always very relevant? No, I don’t think so. Conventions become proportionally less strict with the increase of the point size. If a typeface is meant for composing text, it is by definition related to the archetypes from Jenson, Griffo and Garamont, and hence its anatomy and details, i.e., proportions, weight, contrast, contrast-flow and idiom, can be compared with and mapped against the archetypes. This implies that deviations from the patterns, such as the compression of type which for instance was applied in the seventeenth century in the Netherlands (‘goût Hollandais’), can be mapped (and artificially reproduced) too. And a typeface that completely deviates from the anatomy of the archetypes is not incorrect by definition, but should be judged against the new rules defined by its own anatomy.

    One (or better: the eye, or even better: the brain) can be conditioned with the Renaissance patterns and grids by, for instance, writing with the broad nib (I made a note on this practice here). But one does not necessarily need to handle a pen or brush to study and apply the effects of such a tool (and I’m stating this as a calligrapher). For this reason I developed, in the course of time, the LetterModel as a (geometric) representative of a formalized Humanistic minuscule. My students use this model for exploring the construction of the supported letters, for creating their own writing examples (instead of copying my hand) and for experimenting with the basics of typography. The emphasis is therefore less on controlling pen and brush, and more on research into the basic patterns of writing, and subsequently on the application of these patterns in typography, as it is a related medium.
  • […] was applied in the seventeenth century in the Netherlands (‘goût Hollandais’) […]

    Oops, that should be the 18th century, of course. ‘The “Romain du Roi” was strictly reserved to the imprimerie Royale; Fleischman’s Romans and Italics had Europe before them. The Paris trade, therefore, was bound to take notice of the “Goût Hollandais”.’*

    * Stanley Morison, Letter Forms (New York, 1968), p. 35

  • What I instinctively like about Frank's research is that it points to an early instance of spacing preceding shape in type design.

    John put it quite nicely here. A lot of type design nowadays suffers, IMHO, from the urge for "repetition": same spacing, same stem widths, etc. This is the part of FeB's idea that I like: that in designing a font, the rhythm, color and spacing can, and sometimes should, overrule the isolated shapes and dimensions.
  • William: ‘Of course Monotype also had a unit system. Was this also of cosmic significance?’

    The unitization of characters for the Monotype ‘hot-metal’ composing machines was a clear deviation from 19th-century foundry practice, but obviously didn’t have a notable negative effect on the quality of the designs. Actually, the majority of the fonts produced at the Monotype Works under the supervision of the engineer Frank Hinman Pierpont and with the guidance of Stanley Morison in the first half of the 20th century have always been considered among the best.

    Pierpont has been praised for his technical merits, and Monotype’s Type Drawing Office (TDO) was obviously capable of satisfactorily adapting the designs to the limited widths. As you know, even today the Monotype fonts show their limited widths in digital format (see image below). That this adaptation did not lead to distorted designs could find its explanation in the fact that similar standardizations were made during the production of the Renaissance archetypes. These standardizations are part of the genes of roman and italic type, because roman and italic type were actually formed by them.

    [image]

    A big difference with the Renaissance standardization practice was that Monotype’s 18-unit arrangement system was applied after the type design was made. A layout was chosen that would require minimal adaptations of the design. The units did not come from inside, as in the case of the ‘cadence-units’ I distilled from the archetypes, i.e., they had no direct relationship to the proportions and positioning of stems and curves. Of course, the proportions of the M or W are related to those of the other characters, but dividing these into 18 units doesn’t by definition provide a unit arrangement that fits the stem interval.
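The arithmetic of such an 18-unit arrangement can be sketched as follows. The unit allotments below are invented for illustration, not an actual Monotype layout; the point is only that every advance width must be a whole number of eighteenths of the em.

```python
# Illustrative sketch of a Monotype-style 18-unit arrangement (assumed
# values, not a historical Monotype layout): each character is allotted a
# whole number of units of an em divided into 18.

EM = 1000          # font units per em (modern digital convention)
UNITS_PER_EM = 18

# A hypothetical unit layout; real Monotype layouts differed per face.
layout = {"i": 5, "a": 9, "n": 10, "o": 10, "m": 15, "W": 18}

def advance_width(char):
    """Advance width in font units for a character's unit allotment."""
    return round(layout[char] * EM / UNITS_PER_EM)

for c in layout:
    print(c, advance_width(c))   # e.g. 'a' -> 500, 'W' -> 1000
```

Whether a given design's stems and counters happen to land on multiples of em/18 is exactly the question raised above: the grid is imposed from outside rather than distilled from the design.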

    Monotype’s TDO went to considerable lengths to adjust the typefaces of the always highly critical Jan van Krimpen. But this inevitably required compromises, and in his Memorandum to Monotype Van Krimpen described the problems that accompanied this kind of production. He declared himself in favour of designing a typeface directly within a unit-arrangement system, but with the proviso that no designer should try to make a design on an existing unit arrangement that does not correspond with his own particular rhythm.

    [image]

    It is interesting to see that Van Krimpen’s drawings for Haarlemmer, which were made on an existing unit-arrangement system and keyboard layout to reduce costs (I'm still looking for this particular arrangement; especially the 9 units for the a are a bit unusual), clearly show a simple rhythmic pattern (‘fence posting’) that I traced in Renaissance type. The drawings also show that it was a highly complex task to design within an existing keyboard layout. The drawings for Spectrum show that JvK also designed this typeface on prefixed widths (please ignore the numbers in the illustration). He had more freedom here and did a better job than in the case of Haarlemmer. He also used a tighter rhythm.

    [image]
  • Frank, it's interesting that you mention Haarlemmer. I always thought the narrow set width was deleterious to its appearance and I consider the (metal) Monotype release a failure for extended texts. It seems that vK was simply not amenable to making adjustments (Morison complained of the difficulty of working with him on such things). I consider the DTL version much better!

    Monotype matrices ranged from five to eighteen units of the em, which allowed for thirteen widths, though the number was fewer in fonts that needed more than one row of characters of a certain width, as was often the case. In Linotype, the widths were generally just seven. This sounds like a great restriction, but it was no more so than what the early typefounders imposed on themselves. As others have pointed out, typefounding was an industrial process from the beginning, and it quickly became evident that interchangeability was a very great virtue. The criterion was whether one could still balance the internal spaces with the sidebearings. In most cases, the designs were adjusted so you could. This was an intrinsic part of the editing of the design, whether by the punch cutter or a production supervisor such as Pierpont or Griffith.

    By the way, to address something that came up in an earlier post: The "adjustable mould" is a frequently misused term. It refers only to an early kind of mould that allowed adjustable heights, as adjustable widths were common to most kinds of moulds. Its popularity didn't last long, as even slight variations of height could cause a lot of trouble when locking up a forme. Reliability of width counted too, and for that reason typefounders saw a great advantage in keeping them limited and dedicating moulds to single widths. For example, fonts made for mathematics would generally have only two widths, one being one-half of the other, so everything could stack up evenly (punctuation would be a quarter width).

    Another kind of font that required limited and fully rational widths was Biblical Hebrew, the most complex of all typographic forms undertaken in early printing, which deploys two kinds of diacritics simultaneously: one for vocalization (the nikkudot, or "vowels," which number 11 to 13, depending on how you categorize them), the other for chanting (the ta'amim, or "tropes," which number 32 to 34). The diacritics appear both above and below the letters, with some placed within the body of certain letters; these were struck into the matrices (a drill was used for one of them) together with the letters, as sorts of their own (there was no mitering). The typefounders did not have to cast as many diacritics as my numbers suggest, as nine of the pieces could serve second purposes when composed upside down. The type was based on an 8-unit square, with most letters occupying a 4x4 inner square, two letters 3x4, and four letters 2x4. The diacritics occupied either one- or two-unit widths, never more.
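A minimal sketch of that 8-unit scheme, with the letter and mark widths taken from the description above. The specific glyph assignments and the fitting check are my own illustration, not a historical record:

```python
# Hedged sketch of the 8-unit Biblical Hebrew scheme described above.
# Widths are in units of the 8-unit em square; which letters get which
# width here is illustrative, not documented.

letter_widths = {"alef": 4, "bet": 4, "gimel": 3, "yod": 2, "vav": 2}
mark_widths = {"qamats": 2, "hiriq": 1, "sheva": 1}

def mark_fits(letter, mark):
    """A mark can sit under a letter without lateral shifting if it is
    no wider than the letter's own unit width."""
    return mark_widths[mark] <= letter_widths[letter]

print(mark_fits("yod", "qamats"))  # True: a 2-unit mark under a 2-unit letter
print(mark_fits("yod", "hiriq"))   # True
```

Because even the narrowest letters are two units wide and no mark exceeds two units, every mark fits under every letter, which is the elegance of the old scheme: rational widths make the placement problem disappear.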

    Studying how the old guys figured out how to do their work is not an academic exercise. In the case of Biblical Hebrew, what one finds in the work of the masters is a guide to the most sensible and elegant way to think about character widths and mark placements. For example, most recent efforts at making fonts for Biblical Hebrew make the mistake of following the proportions of fonts intended for Modern Hebrew (which uses no diacritics), with the result that some characters are drawn too narrowly for the diacritics to be properly situated without disruptive lateral shifting. Another matter to consider is the rhythm of the letters and the regularity of certain counters--an issue that looms as large in modern Latin type design as it did 500 years ago. (I'm speaking, of course, about text types.)
  • I have some interesting examples to post regarding the manufacture of early Hebrew type, but each time I try to post them, I get an error message. I'm on Mac OSX, and have a number of browsers (Safari, Firefox, Chrome). Can anyone help me?
  • William Berkson
    Frank, fascinating stuff on van Krimpen's wrestling with the Monotype units. Is the letter you reference published online anywhere?

    The reason for my sarcasm on 'Darwinian' is that I think only one advance in type is on that scale of importance, and that is the invention of mass-production printing with pre-produced letters, by Gutenberg. I think there is much to learn from your work, but Darwinian, no.

    There is another context that is important here, and that is the demands of the eye and brain. I think that regularity of writing and even color of text enhances readability, and this was already well established by scribes in many different scripts, not only Latin. So I would say that the early printers of roman type were able to simplify their production process with units because the demands of the eye had already resulted in regularized scribal hands. Also, I doubt that absolute regularity of widths is important. It may even be that approximate regularity is better than absolute regularity, but I don't know. Do you think there is any perceptual reason why absolute regularity of widths would be better?

    Scott-Martin, the work on Biblical Hebrew is dizzyingly complicated, and I can see that they would absolutely have had to have a unit system to manage it. One of my pet theories is that Hebrew with the vowel points, nikud, is inherently less readable for native speakers. The nikud were, it seems, developed for non-native speakers, and still, except for specialized uses, native speakers rarely use them. So you can certainly make them look better, as you have been one of the leaders in doing. But I think they are inherently problematic. What do you think on the readability of Hebrew with nikud?
  • Bill, these marks have been with us, in one form or another, for over 1200 years (some even much more), so it's difficult to think of them as not being fully acculturated. You see, there are no "native speakers" of biblical and liturgical Hebrew, at least not in the sense that Modern Hebrew is a national language of everyday discourse. The language of the Bible is very old, and the oldest parts are often quite archaic, like Homer's Greek. Language of liturgy is heavily from the Middle Ages (with many biblical quotations and paraphrases), and from very far flung locales. The Aramaic of the Talmud, which is written without diacritics, was and still is a kind of rabbinic patois, native perhaps (and only perhaps) to a very small group.

    It's interesting to note that the nikkud (vowels) are used in some teaching of Modern Hebrew as a kind of "training wheels," as Hebrew has no native vowels, per se, other than the letters that serve as matres lectionis. (I'm squeezing a lot of Latin in here!)
  • John Hudson
    John Hudson Posts: 3,191
    One of my pet theories is that Hebrew with the vowel points, nikud, is inherently less readable for native speakers.
    Bill, has anyone done tests on this hypothesis? Nadine Chahine told me about testing that had been done on the parallel Arabic case -- unvocalised vs. vocalised text -- and, unsurprisingly to me but perhaps to you, speed and comprehension both benefited from the addition of the vowel marks, despite the fact that these are normally not written in most text.
  • Frank, first let me offer my apologies for pushing this outside the realm of the Latin alphabet, but when people question the usefulness and relevance of historical methods, I feel compelled to give some examples in which they are indispensable. I believe they're no less so for Latin.

    Readers of biblical or even liturgical Hebrew, which uses only the vocalization vowels, fall into several groups. First are those who learned the language as children for religious reasons, and who repeat the words of lengthy texts day after day, or Sabbath after Sabbath, throughout their lives. Second are the children who are learning the language. Third are non-religious scholars who study biblical Hebrew but rarely ever hear it, and seldom if ever recite it out loud.

    I have found in my work designing large Jewish prayer books that the people in the first category are very comfortable with Hebrew composed in the old metal typographic tradition, with small diacritics, because for them the diacritics are a kind of mnemonic or disambiguator. It's generally believed that children need larger type and larger diacritics, though it stands to reason that the opposite is true, since most children have superb eyesight! The third group is problematic, because their word recognition is halting, and because they don't hear the chanting week after week, the aural expectation that gives rise to visual expectation isn't there. Synagogue worshippers who didn't learn as children, or who see the texts only occasionally, can fall into the same category.

    John H's SBL Hebrew is an example of a type made for the third category, and it seems the client wanted the diacritics as large as possible, larger than they ever could have been made in the metal era, creating situations that force it outside of "best practices." The bad news is that no one ever wrote down the method in the old days, as it was the most costly of all composition and was held "secret" by a few. Fournier, who knew a lot, didn't understand Hebrew, so what he wrote about its composition--which was repeated again and again for two centuries--was incorrect. The good news is that some of the best old type survives completely intact, more than anywhere else at the Plantin-Moretus Museum, with justified matrices there to be studied. As we say: it's not rocket science, though you do need to know the language and how it's used.