...about something I had imagined that hardly anyone would agree with him about. (His views on the importance of cultural authenticity, I believe, are widely shared these days.)
And it isn't just anyone who is in agreement with him. It's Stanley Morison, no less.
Here's the quote, from his First Principles of Typography:
"It does no harm to print a Christmas card in black letter, but who nowadays would read a book in that type? I may believe, as I do, that black letter is in design more homogenous, more lively and more economic a type than the grey round roman we use, but I do not now expect people to read a book in it. Aldus' and Caslon's are both relatively feeble types, but they represent the forms accepted by the community; and the printer, as a servant of the community, must use them, or one of their variants. No printer should say, 'I am an artist, therefore I am not to be dictated to. I will create my own letter forms', for, in this humble job, no printer is an artist in this sense."
While he is not at all sanguine about the prospects of shifting the general preference, and in this respect he may differ from Hrant, he is in agreement that black letter is "better", in whatever sense one wishes to take that term.
I had thought it to be obvious that black letter was objectively less legible (and, indeed, far less legible), and one could demonstrate that by, for example, subjecting specimens of black letter and Roman type each to a two-dimensional Fourier transform, and observing the much greater high-frequency content in the former.
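That comparison is easy to sketch. Below is a minimal, hedged illustration of the idea: measure what fraction of an image's spectral energy sits above a radial frequency cutoff. Real scans of type specimens would be the proper input; here two synthetic stand-ins are used (a dense vertical-stroke "textura" pattern versus a smooth ring standing in for a roman "o"), so the images, names, and the 0.2 cycles/pixel cutoff are all illustrative assumptions, not measurements of actual type.

```python
import numpy as np

def highfreq_fraction(img, cutoff=0.2):
    """Fraction of spectral energy above a radial frequency cutoff.
    Frequencies are in cycles/pixel (Nyquist = 0.5); DC is removed first."""
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(F) ** 2
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))[None, :]
    r = np.hypot(fy, fx)  # radial frequency of each bin
    return power[r > cutoff].sum() / power.sum()

# Synthetic stand-ins for scanned specimens:
y, x = np.mgrid[0:128, 0:128]
# "textura": dense vertical strokes, period 4 px -> energy at 0.25 cyc/px
textura = (x % 4 < 2).astype(float)
# "roman o": one smooth ring -> energy concentrated at low frequencies
rad = np.hypot(x - 64, y - 64)
roman = ((rad > 30) & (rad < 44)).astype(float)

print(highfreq_fraction(textura), highfreq_fraction(roman))
```

With real specimens one would binarize scans set at the same apparent size; the ordering of the two fractions is the claim being tested.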
Such a discussion is only useful if we can separate mere instances of blackletter (which can very much suck) from what makes blackletter blackletter. Which is not easy to do.
My own epiphany concerning blackletter came when I bought a discard from the UCLA Research Library, about 20 years ago. It was an early 20th century German novel. Hundreds of pages. Set with unequivocally good taste and high craft. And it struck me that it must have been easy to read, contra our official-party-line "enlightened" prejudice. When I looked closer, I realized why, and it just makes sense.
Blackletter, in its spectrum of convention, incorporates two important advantages over whiteletter: because it can make curves into verticals it's horizontally more economical, something the Latin writing system needs help with, being out of harmony with the "geography" of our visual acuity; and it tolerates, even encourages, much more divergence, especially in its extenders, which are crucial for bouma decipherment. Yeah: http://typedrawers.com/discussion/2285/brain-sees-words-as-pictures
For example if you make your whiteletter "h" descend, you've pretty much killed it as a text font for most customers. In blackletter, it's supposed to have a descender.
Concerning the overly-homogeneous texture contention, many instances of blackletter (in fact, entire styles of blackletter, such as textura) very much suffer from that. But all I have to point out is the fraktur "o", with its resplendent internal divergence, even post-modern hybridity. You can't do that in a mainstream whiteletter.
Now, all this does not mean we should use a traditional blackletter for much more than middle-school Participation certificates. For one thing, the caps are illegible! But we can, and should, try to assimilate the above advantages into our whiteletter spectrum of convention, broadening it, giving it more freedom, not least the freedom to be more readable.
Furthermore, Latin is too wide for optimal reading, and the blackletter approach can help mitigate that. When it comes to extenders, there's no reason blackletter can't have them long.
I'll try to dig up the book from the gara... I mean, archives :-) but in the meantime here's a cheesy old comparison using a scan from it:
(The background glyphs are from my Brutaal design.)
Do note that they're the same width there, but the blackletter has greater apparent size (plus has looser, more readable spacing) so could be scaled narrower/smaller.
BTW at ATypI 2000, I gave a talk about all this. But back then nothing was recorded...
To be fair, most writing systems have this problem; Chinese (especially when set horizontally) and Hangul are notable exceptions. BTW vertical scripts (such as Mongolian) have it much worse because they go exactly against our acuity geometry (which has resulted from biological evolution).
BTW this is why the idea behind Hangulatin is so fascinating:
And from my original Alphabet Reform essay:
Although Anita did such an amazing job, I've since become less certain of that...
Latin is easier to learn, but then limits you, both in terms of reading speed and cultural lyricism. Alphabets in general are better for business than culture...
BTW apparently @Dan Reynolds gave a talk about blackletter hybrids at last year's ATypI! Worth a watch for sure, and hopefully will serve to inspire future efforts.
I started a poll:
Chinese readers use MUCH shorter saccades (as do Japanese readers when reading kanji as opposed to katakana). It doesn’t seem to be an issue of visual arc and acuity; it appears to be information density and info-per-saccade.
Don’t get me wrong: I am sure we could have a typeface or writing system that exceeds a critical threshold, such that people could no longer get maximum info per saccade. But I am pretty sure whiteletter/antiqua is not there, in typical text usage.
I believe experiments with text size tend to bear this out. Reading speed only starts to drop when text gets a LOT bigger!
And shorter saccades might be an indication of non-immersion.
I think in proper readability testing (where boumas come out of hiding) speed drops shortly above typical text sizes.
I do not suggest that hybrid blackletter/roman typefaces be called Midollines. Instead, what I mentioned in the talk was that those kinds of designs quickly came to be called Midollines in the mid- and late-19th-century German printing industry. Around 1900, that term was replaced by Neudeutsch, which like Midolline also referred to a number of specific typefaces. As a term, Neudeutsch was popular for even less time than Midolline had been. The Neudeutsch typefaces were only popular for about a decade, and the term held on for another two or three after that. In retrospect, I wish they had held onto Midolline as a term because: a) it would have provided for more continuity, b) it might have encouraged printers in 1900 to consider resurrecting some older display types from 1860, c) Midollines as types – but not as a term – were used internationally, whereas the Neudeutsch types were not really so much, and I think that a European-wide term is better than a national one, and d) because not suggesting thus would have ended my talk on a downer.
The best term to use for hybrids, in my opinion, is … well, hybrids. Because otherwise we have to invent a new term (which would be naff), or resurrect an old term, and there is no good reason to resurrect one term over any other forgotten one. Also, every term that has fallen into disuse has some problems and baggage. Using Midolline as a catch-all term today is not ideal, as the term we are looking for should describe all kinds of work not necessarily based on the artwork of one person. For example, it is even silly to continue to use terms like Garalde to describe the types cut by Haultin, or Didone to describe Walbaum’s romans. When doing so, one automatically is buying into a value judgement that Haultin and Walbaum’s work was not as good as Aldus & Garamond’s and Bodoni and the Didots’, etc.
In the end, every hybrid typeface published in Germany in the 19th and 20th centuries remained an experiment. None of them were long-lasting successes, and none of them entered the “canon” of proper typefaces. Even Eckmann and Behrens, who without a doubt designed the most commercially successful hybrid typefaces, did not have their work enshrined in typeface classification systems (not that such a thing would be ideal anyway). I write this to say: the text that people read most in the 21st and 22nd centuries is not going to look like a German hybrid from the 19th or the 20th.
A new word is only naff if it's unnecessary. For example taking a selfie might be naff, but without a word for it how could we concisely express that opinion? :-) All of us use words that were invented after we were born.
Of course they will look different, thankfully.
But as anybody (especially a revivalist) should admit, inspiration can morph something dead (actually, dormant) into something useful for the living. This is why I'm so glad you gave that talk, not because of historicism for its own sake.
I chose Times because it's ubiquitous (hence an easy reference point) but also because it has a reputation for high readability (which it frankly mostly deserves). Burgess did a good job... ;-Þ
Our screens must be doing a different job, because I'm seeing the opposite (at least at the bouma level)... Anyway they all fare too poorly, because they're not designed for that size.
— John A. Shedd
To put it more clearly: lack of convincing empiricism does not relieve us from having to take action with whatever justification is available.
That's Hangul's most powerful feature: you can slowly compile a syllable from its constituent alphabetic letters, but once proficient you can read it as a cluster.
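That compile-then-cluster structure is literal in Unicode: every Hangul syllable block is computed arithmetically from its jamo indices (leading consonant, vowel, optional trailing consonant), per the standard's conjoining-jamo formula. A minimal sketch:

```python
# Unicode Hangul syllable composition: syllables start at U+AC00 and are
# laid out as (leading * 21 + vowel) * 28 + trailing.
S_BASE, V_COUNT, T_COUNT = 0xAC00, 21, 28

def compose(l: int, v: int, t: int = 0) -> str:
    """Compose a syllable block from leading-consonant, vowel, and
    optional trailing-consonant indices (t=0 means no trailing jamo)."""
    return chr(S_BASE + (l * V_COUNT + v) * T_COUNT + t)

# ㅎ (leading index 18) + ㅏ (vowel index 0) + ㄴ (trailing index 4) -> 한
print(compose(18, 0, 4))
```

So the same data supports both reading modes: a learner can decode the three jamo one by one, while a proficient reader takes in the composed block as a single unit.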
The thing is, if boumas don't exist and we only read via foveal letterwise compilation, letterspacing is moot. And you still go faster by fitting longer words in the fovea!