Neology: a type design experiment in readability

Comments

  • Peter Enneson Posts: 31
    edited October 2014
    Nick, there’s a big irony here.

    You might want to believe that what scientists have been looking at is something else, but historically, explicit attention to legibility and readability in type design and typography — and the way we use these terms in typographic discourse — derive from science.

    The type community has picked up on this and acknowledged legibility and readability as ends. The type community has asserted its own ways of defining and assessing them. The type community has developed a critique of scientific attempts to operationalize legibility and readability and experimental efforts to measure them.

    This is as it should be.

  • Nick Shinn Posts: 2,131
    Irony is always present when science addresses culture and discovers, simplistically, what is tacit for practitioners.

    Letters should be legible in order to be deciphered, the more so the better, says research. A principle that founders have observed since the 15th century, as they have sought to design expressive, recognizable type shapes suitable to contemporary technics, and printers have sought to print them cleanly.

    “The tendency of the best typography has been and still should be in the path of simplicity, legibility, and orderly arrangement.” —Theodore Low De Vinne, explicitly referring to legibility, 1902.

    Readability is important, says research. A principle well known to authors, publishers, printers, type designers and typographers—as Linotype pointedly noted by opening the text of its 1937 book promoting the Legibility Series of news fonts, The Legibility of Type, with a reproduction of a 1915 poem from the Linotype Bulletin, “Type Was Made to be Read”. That date was prior to the emergence of scientific investigation of the effect of typography on reading by Luckiesh et al.

    The Legibility of Type exploited recent reading research in the marketing of the Legibility Series, but the core of the program addressed eye fatigue via the age-old problem of matching type design to technology, and was in no way informed by scientific theory or discovery. Specifically, the Legibility designs deprecated (1) high contrast, because its fine details were too fragile for the newest, fastest presses, and (2) narrow openings to counters, because these were apt to clog up when not ideally printed by said presses. Of course, such improvements satisfied the “scientific” principle of legibility—which for the trade then encompassed what we now refer to as readability—that it is better for letters to be easy to recognize and read. (This was before disfluency theory.)

    Admittedly, a distinction between legibility and the scientifically derived term readability has emerged for some typographers, where once legibility covered both. The naming of Font Bureau’s Readability Series (1997) demonstrated this evolution.

    But this is not for everyone. Sofie Beier avoided the scientifically derived “readability” by titling her book Reading Letters: Designing for Legibility. After all, “legere”, the Latin root of legibility, means “to read”.
  • Peter Enneson Posts: 31
    edited October 2014
    Irony is indeed present when science addresses type culture and investigates simplistically what it sees to be tacit for practitioners.

    Legibility might come from legere, but I don't think we read letters; and it seems to me much of Sofie's experimental work is on affordance at the letter level rather than efficient processing in visual word recognition.
  • Nick Shinn Posts: 2,131
    edited October 2014
    I would imagine that the first readability researchers chose that word over legibility because they were concerned with vocabulary and grammar, not the appearance of words, and felt that the concept of legibility, in general usage at the time, often referred to the quality of handwriting, which was off-message.

    The printing trade, however, used “legibility” to cover everything from issues of inking and paper stock to the layout of pages.

    So, Luckiesh et al. decided on “readability” for their research on eye fatigue and typography, because the term had scientific provenance, and could be precisely defined at their discretion, removed from the overarching scope of the typographic term.

    Does that seem reasonable?

    Although I’ve used “readability” in my work on Neology, I’m becoming comfortable with the idea of one term, “legibility”, in the traditional, broader typographic sense.

    It seems a disservice to the grand old word legibility to limit it to the childish task of recognizing letters, while the upstart readability sees no more than eye fatigue.
  • Peter Enneson Posts: 31
    edited October 2014
    Paterson and Tinker found the influence of typographical factors, such as size of type, measure, style of type, kind of type, leading, paper stock and lighting, on reading speed to be subtle, but not negligible, and consistent with information derived from eye-movement studies. Luckiesh and Moss describe readability as an “integral effect.” They also discovered that readability has a non-linear relation to visibility and reading speed — the curves of each of these have different shapes. I wouldn't call this “investigating simplistically what is tacit for practitioners,” hence my variation on your irony quip. It just doesn't go far enough. I find these things illuminating, even though I don't think that Paterson and Tinker's results all make sense, or that they sorted out legibility to any great depth, or that Luckiesh and Moss got to the bottom of readability. As I mentioned, Tinker and Paterson only looked at the knees of performance curves and used an artificial task which introduced a hidden confound.

    Early science concerned with type equated legibility with different things. Legros and Grant equated legibility — perhaps simplistically — with template overlap at a letter level; Paterson and Tinker equated it — perhaps in an overly ad hoc manner — with reading speed for words in a small cluster of sentences. Researchers went their separate ways without sufficient conceptual development of the term and without a “social contract” about its use. Legibility was operationalized without sufficient effort to establish construct validity. Ole Lund called this easy operationalism.

    The term legibility comes from ordinary language and relates to lived experience. Its placement within discourse about typography (De Vinne) somewhat sharpened and specified its meaning. To serve as a basis for strict experimentation or for quantitative procedures the concepts of everyday language and craft-based reasoning must first be “ordered to” (Junius Flagg Brown) or “coordinated with” (Kurt Lewin) a series of “conditional genetic concepts” or “elements of construction” specific to a functional ontological domain — in the case of legibility and readability, perhaps the psychophysical dimensions or vectors of ‘affordance’ / ‘ease.’

    Tinker and Paterson didn't do this with the term legibility. They preferred the “black box” approach typical of “dustbowl empiricism.”

    Luckiesh and Moss rejected the term legibility because extending it to cover what they were concerned with — expense of effort; ease; processing efficiency — would only have exacerbated the problem of the term legibility's incipient lack of referential specificity. This was circa 1937. Theodore De Vinne used the term “readability” repeatedly in 1885, but the term didn't have a scientific provenance when Luckiesh and Moss appropriated it. The formalization of a branch of inquiry concerned with the measurement of cognitive issues related to semantics and syntax — framed as readability measurement — happened about a decade later. Luckiesh and Moss were concerned with perceptual processing issues relating to the sensorial dimensions of the physical text.

    It’s not that upstart readability sees no more than eye fatigue; it’s that a reliable difference in the increase in blink rate over time with different levels of boldness or kinds of type says something about how manipulating these factors affects ease or efficiency, and through them, the total reading experience.
  • Nick Shinn Posts: 2,131
    edited October 2014
    …the term readability didn't have a scientific provenance when Luckiesh and Moss appropriated it.
    Gray & Leary used the term “Readability” in their 1935 study “What Makes a Book Readable” (Table 1).

    They identified 228 elements that affect readability, and grouped them under four headings:
    1. Content: Propositions, Organization, Coherence
    2. Style: Semantic and Syntactic Elements
    3. Design: Typography, Format, Illustrations
    4. Structure: Chapters, Headings, Navigation

    It would appear that both cognitive and perceptual researchers were narrowing in towards operationalizing referential specificity in their separate fields (what I had previously described as “ownership”), and both settled on the fresh term readability.
  • Nick, if you want to understand the history of the term readability, read our damn paper. Peter and I spent months and months on it, and it lays out the history in detail, and evaluates the quality of the research and the robustness of the different concepts.
  • Nick Shinn Posts: 2,131
    Sorry Bill, I checked out the link you posted, but didn’t feel like shelling out $40 for the PDF. Surely it should be possible to have a discussion without specifying one’s adversaries’ reading list? Can I recommend some books you should buy to understand where I’m coming from? Have you bought my damn font yet? :-)
  • William Berkson Posts: 74
    edited October 2014
    Nick, there was a philosopher of science called Jerzy Giedymin. He was educated and began his career in Soviet-dominated Poland, came under the influence of Karl Popper, the testability (and Open Society) guy, and spent the rest of his career in England. He pointed out that moves to avoid testability were “immunizing strategies” to block one’s ideas from the possibility of criticism or refutation. He characterized such strategies as impermissibly “dictatorial,” whereas science is democratic in taking as authoritative data that anyone can observe. Your adamant claim that readability is in principle not subject to measurement is, then, an immunizing strategy which in effect sets you up as the autocrat of readability—the “typocrat”.

    There are three problems with this. The first is that when you avoid measurement and refutation you also drain your ideas of empirical content—of the possibility of actually saying something interesting about reader experience. Second, it is actually pretty easy to measure readability; the challenge is to measure it well and reliably. So saying that it is impossible to measure is silly. Third, some scientists measuring different aspects of reader experience have actually got interesting results that no type practitioner had come up with before. Further, there are questions on which practitioners disagree, and which only future empirical testing can resolve.

    On the first problem, for a theory to be testable it needs to forbid some things. For example, a theory might say that type below a certain visual size is read more slowly. The theory would exclude the possibility of a typeface being read more quickly at that small size, compared to the speed at a larger size. And that you can check; it’s a real test, a possible refutation. The problem is that if you don’t exclude anything observable, you also don’t predict anything. So if your “readability” can’t be measured and so can’t be tested, then you can be the autocrat of readability, but you are a king in a kingdom of one, namely yourself. The advantage of science, when well done, is that it includes all of us.
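
    (A minimal sketch, in Python, of what such a test looks like. Every number here is invented and scipy is assumed to be available, so this shows only the logic of a possible refutation, not a real result.)

    ```python
    # Hypothetical reading speeds (words per minute) for the same passage
    # set at a small size and at a larger size. All data invented.
    from statistics import mean

    from scipy import stats

    wpm_small = [182, 175, 190, 168, 171, 185, 160, 177]
    wpm_large = [210, 198, 205, 220, 192, 215, 201, 208]

    # The theory forbids the small size being read as fast or faster.
    # A one-sided t-test asks whether the data contradict that prohibition.
    t, p = stats.ttest_ind(wpm_small, wpm_large, alternative="less")
    print(f"means: small {mean(wpm_small):.0f} wpm, large {mean(wpm_large):.0f} wpm")
    print(f"t = {t:.2f}, one-sided p = {p:.4f}")
    # If small-type speeds came out equal or higher, the theory would be
    # refuted; that exposure to refutation is what makes it an empirical claim.
    ```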

    Second, if readability means anything, it concerns reader experience. That being so, you can always measure it by asking people what they think. There is a problem with such introspective reports. It is that they are not reliable, as we are subject to all kinds of biases and mistakes. And though expert craft opinion is a step better in some ways, it is also subject to bias and error, as we can see from the fact that experts disagree over whether, e.g. seriffed type is more readable in some settings. However, it is possible to deal with such problems. One way is by what the economists called “revealed preference.” You look not at what people say, but what they do—what formats sell, for example. But more than that you can also find specific measures such as reading speed or blink rate, and see how these relate to the introspective reports and to revealed preference. The gold standard for “validating a construct” such as ‘readability’ was given by Donald Campbell (historically later than Luckiesh’s work). He said that a concept should have both convergent and divergent validation. You need to have diverse tests which converge on the same variable as important, as well as tests which differentiate that variable from others.
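
    (As a minimal sketch of what convergent and divergent validation look like computationally: the measure names below echo Luckiesh's, but every number is invented, so the point is only the pattern of correlations such a validation would look for.)

    ```python
    import numpy as np

    # Rows = test conditions (e.g. four type sizes); columns = measures.
    # All scores invented for illustration.
    measures = ["blink_rate", "muscle_tension", "reading_speed", "visibility"]
    scores = np.array([
        [12.0, 3.1, 240.0, 0.9],
        [15.5, 3.8, 236.0, 0.8],
        [19.0, 4.6, 241.0, 0.6],
        [24.0, 5.5, 238.0, 0.4],
    ])

    # Correlate each pair of measures across conditions.
    r = np.corrcoef(scores, rowvar=False)
    for i, name in enumerate(measures):
        for j in range(i + 1, len(measures)):
            print(f"{name} vs {measures[j]}: r = {r[i, j]:+.2f}")

    # Convergent validation: the fatigue measures (blink rate, muscle tension)
    # correlating highly with each other. Divergent validation: those measures
    # separating from reading speed, which here barely moves across conditions.
    ```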

    Luckiesh in fact did meet the gold standard with his tests on readability. He showed that overall muscular tension, weakening of the eye muscles from strain, blood pressure, and blink-rate increase over time all went together—converged—on conditions that people routinely complain about as being fatiguing: extremely small type, low light, etc. Blink rate he found to be the most sensitive, so he used that measure more extensively. He also compared blink rate to other measures, such as two different measures of reading speed, and visibility. His ‘visibility’ was a measure of how far you could lower the contrast of foreground and background before you couldn’t identify words. He found that visibility, reading speed and blink rate gave divergent results. For example, some bolder type—he tested four weights of the slab serif Memphis—was more visible, equal in reading speed, but more fatiguing. So his extensive testing met the gold standard of Campbell.

    W. A. Dwiggins, certainly an expert type designer, at the time found Luckiesh’s work very interesting, but in need of further testing on different variables in addition to boldness.

    As to the actual results of Luckiesh and others, and their interest, I might post on that later. The article discusses this at length.
  • Peter Enneson Posts: 31
    edited October 2014
    Gray & Leary used the term “Readability” in their 1935 study “What Makes a Book Readable”
    I wasn't aware of that!

    I wonder if Luckiesh and Moss knew the book. It’s not mentioned in Reading as a Visual Task.

    It's possible that Paterson and Tinker knew the book. Now I see their 1940 title How to Make Type Readable might be an allusion to the Gray and Leary title. I will have to check if they refer to it.

    It would make more sense of Tinker’s contention, in a letter to Luckiesh in 1943, that “it seems to me we were the first ones to use the term readability in our book How to Make Type Readable,” and his claim in a 1944 paper that “in recent years writers have come more and more to talk about readability of print. Paterson and Tinker and Luckiesh and Moss have been the leaders in developing the concept of readability [my emphasis].” Luckiesh and Moss formally set out their usage of the term at some length in a 1939 paper, “The Visibility and Readability of Printed Matter,” but were already turning to it in 1937. The 1939 Luckiesh and Moss paper does, however, appear in the bibliography of How to Make Type Readable.

    Tinker used the two terms interchangeably. I thought Tinker was being devious and disingenuous. But the situation might be more complex. After about 1950 Tinker went back to using legibility as an inclusive term.
  • There seems to be a missing element here: a discussion of Glazed Eye Syndrome, which occurs when reading material is so boring, so onerous, so poorly written that one's eyes glaze over. It's blink rate all the way to tears. Is it not a factor in readability? Shouldn't the readers' interest in the written material be taken into account when studying their reading physiology? How can you separate it from the physical act? Of course, to do so would be to negate the value of the "scientific" approach or, at least, throw in a variable that's too large to reckon with.

    Here's another thing to consider: Did you take into account that one may reach a point of diminishing returns in reading speed? That faster isn't necessarily better? I remember some studies of Speed Reading, which enjoyed a vogue in the 1960s and 1970s, that indicated a poorer-than-average comprehension rate amongst its devotees. If that's true, wouldn't good typography keep our reading speed in check (within a certain range)?

    Sometimes people simply ask the wrong questions. At a conference of noted thinkers held at Mohonk Lodge in 1949 or 1950, Trygve Lie, the first Secretary-General of the United Nations, gave a speech in which he said that universal education is the greatest force for equality. Richard Feynman stood up to refute the remark, saying that, in his observation, education was the most disequalizing of forces and the most disruptive, which was not to say it wasn't good or important, but that it was often misunderstood as a social force.

    As we all know, it's possible to overthink a problem.
  • Nick Shinn Posts: 2,131
    edited October 2014
    Oh dear Bill, another lecture on What Science Is.
    And this time I am a silly, undemocratic Typocrat!

    **

    I really don’t want to discuss readability from the scientific perspective, but keep getting drawn in, to refute it.

    Certainly, if readability = x, where x is measurable, then it is possible to measure readability.
    But what if readability ≠ x?
    That is my proposition: that reading is a complex cultural process, defined as the understanding of a text. And thus, in the post-modern sense, it is impossible to measure a text's capacity to be understood, because it has so many meanings, and ultimately the reader’s understanding is human, not mechanical. Especially in the consideration of extended, “immersive” texts.

    I think I demonstrated, conclusively, that a “wrong font” mixture of geo and grot is just as readable as either constituent, providing there is a harmonization of text color. I don’t think there is a need to verify this in a laboratory. I don’t think that would be remotely interesting, although Peter is welcome to hypothesize further isolation of variables.

    What interests me is why the mash-up, although of perfectly adequate “readability”, is so uninviting to the typographer putting together a layout. Is it because typographers are inherently conservative, and their work is informed by established genres? Is the Neology mash-up only “groovy” when very obvious in display settings?

    I’m inclined to think that this technique might be better developed, and more attractive (both visually and conceptually), in a serif style, with the variants parsing the idiosyncrasies of letterpress printing.

    But I still hold out hope that someone, somewhere, will someday utilize the fruits of this experiment.
  • William Berkson Posts: 74
    edited October 2014
    Scott-Martin, ever since the book The Art of Readable Writing by Rudolf Flesch became a best seller in 1949, I think the term "readability" has been used primarily in the context of good writing, the art of rhetoric. However, under the influence of Luckiesh and Linotype, the idea of "readability" as ease of reading extended text has been a part of the lore in typography, at least in the English-speaking world, since the late 40s. If you look at the evidence in our article, I think you will see that there is no doubt that Luckiesh is the inventor of the term used in this special sense.

    Luckiesh was well aware that how interesting and how well written a text is influences how readable it is in his special sense. That is because if we are interested, our adrenaline flows, and makes us feel less tired, among other reasons. Therefore he took care to control for this factor, in an effort to test for typographic factors only. He also controlled the typography quite well, with the aid of Linotype, which was supplying him with samples. Linotype, along with Monotype, had in the 1930s the most knowledgeable type people in the world.

    On the issue of diminishing returns of reading speed, Tinker was not very typographically sophisticated, and just identified legibility or readability (he used the words interchangeably at different phases of his career, as Peter indicated) with speed. Luckiesh, under the influence of Harry Gage at Linotype, was much more sophisticated, though still far short of the best at Linotype, like Dwiggins. Luckiesh was very critical of using reading speed as a broad indicator, because he thought it was relatively insensitive to typographic factors, compared to blink rate. The divergence of visibility, reading speed and readability as ease of reading is the most interesting result of Luckiesh. It does raise questions that Luckiesh never explored, and which were never taken up by other researchers. For example, if short lines increase ease of reading, but slow down overall speed (I don't know if this is so), what would be preferred by readers, as measured by, say, revealed preference? By comprehension or proofreading accuracy?

    The truth that Peter and I got to in studying the Luckiesh-Tinker debate is that reader experience has more than one dimension, and that real progress is likely to involve comparing them. Unfortunately, no one since has yet taken up the promising blink rate tests, validated them further, and extended them.
  • Peter Enneson Posts: 31
    edited October 2014
    Nick, I like what you are doing. I like the fact that it is experimental. I like the ethic of inquiry and creative norm-violation that seems to be impelling it. I think the premise is attractive. I just think you can do more with it 1) in terms of development; 2) in terms of testing.
  • Nick Shinn Posts: 2,131
    edited October 2014
    Thanks!

    1) I don’t think I can develop this typeface any more; designs are pretty much fixed once I publish them commercially. However, I might explore “simulating-the-subtle-vagaries-of-a-foundry-letterpress-typeface” using the same pseudo-random architecture. In that situation, the alternates would not be separate styles, but could be achieved by means such as baseline shift (“bouncing”) and weight variation. The main thing is to see what an idea looks like, once I’ve worked it out.

    2) I’m not sure I know what kind of testing you mean. I run a commercial type foundry; the only focus-group testing I do is to put something on the market and see if anyone licences it. I would have to write a research proposal to get the funding to warrant the expense of incorporating scientific testing into the product development of a typeface, or have a commission from a special-needs organization. Or put together a promo video for Kickstarter. But like I said, it’s a bit of a grind making all the alternates for a pseudo-random typeface, and I’d rather get on with developing my backlog of typeface ideas that don’t require so many glyphs than get stuck in a rut. But maybe the baseline shift and weight variation idea would be relatively quick.
  • Peter Enneson Posts: 31
    edited October 2014
    Nick, I think that for distribution purposes your demonstrations are adequate. You don’t need conclusive evidence for the font to be commercially viable or attractive, and you don't need to pretend you have it. Suggestive, arresting and prima facie convincing demonstrations go a long way. Legato got attention because it had a compelling premise and looked like it was living up to its promise of facilitating the perceptual binding of single letters into strong gestalts.

    I think further development and testing could be an interesting project for a master's-level student in experimental psychology, or a Reading-style student with the ability to manipulate the font and access to a lab, or access to statistical software, willing subjects and a program for running the tests. A place to start would be to compare Fourier transforms, as in the sketch below.
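
    (A first-pass sketch of that comparison in Python, assuming two same-sized grayscale images of identical text, one set in each font. The filenames are placeholders, and numpy and Pillow are assumed.)

    ```python
    import numpy as np
    from PIL import Image

    def spectrum(path):
        """Log-magnitude Fourier spectrum of a grayscale text image."""
        img = np.asarray(Image.open(path).convert("L"), dtype=float) / 255.0
        return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))

    a = spectrum("neology_sample.png")   # placeholder filename
    b = spectrum("control_sample.png")   # placeholder filename

    # Crude similarity score: correlation between the two spectra.
    # (The images must have identical dimensions for this to make sense.)
    corr = np.corrcoef(a.ravel(), b.ravel())[0, 1]
    print(f"spectral correlation: {corr:.3f}")
    ```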

    I would be happy to do this if I remember how and my fft permissions haven't expired.

    I think it would be huge to have conclusive proof that harmonization of text colour is key to readability and can absorb glyph variation, and then to figure out why, and if certain kinds of glyph variation are advantageous.
  • William Berkson Posts: 74
    edited October 2014
    Nick, as I indicated in my first post in this thread, I agree with Peter that your approach here is interesting, and could be developed and tested further.

    I have made clear my objection to your claim that readability can't be tested, and don't want to belabor it. But I would like to give some context. As you may know, I did my PhD in history and philosophy of science, taught it, and am the author of two books (one recently re-published by Routledge) and many articles in the field. And in the past 7 or 8 years I have also been involved in type design, as you know. So the relationship between art and science is important to me, and I would like it to be a healthy one. Putting up barriers between the two is to me very unhealthy. As I said, this kind of thing is an attempt to "immunize" ideas from criticism, where criticism, including by empirical testing, is a key to the growth of knowledge. And new insights in science can help art, and vice versa.

    I know "postmodernism" means a lot of different things, but in the sense of "truth being socially constructed" and by implication theories not being subject to independent testing (see here http://en.wikipedia.org/wiki/Postmodernism#History), it seems to me that that is the name of a sickness, not a sound philosophy. I agree with Goya—an artist!—that "the sleep of reason gives birth to monsters." http://en.wikipedia.org/wiki/The_Sleep_of_Reason_Produces_Monsters The recent example of Paul de Man, celebrated postmodernist who turns out to be a huge con man, is to me an illustration of the dangers of trying to segregate art from science. Once you immunize ideas from empirical testing, anything goes.
  • Nick Shinn Posts: 2,131
    Bill, I too consider this a moral issue.
    We are human beings, not machines.
    Neuroculture is just as much of a slippery slope as anti-rationalism.

    That is why we need a fence between science and typography, so they can be good neighbors.
  • Your adamant claim that readability is in principle not subject to measurement is, then, an immunizing strategy which in effect sets you up as the autocrat of readability—the “typocrat”.
    Amazing quote :)
  • I advocate trying to be clear enough in our ideas about type and reader experience that we can test them with repeatable experiments. That is what I advocate and you have been opposing. I don't advocate either that we are machines, or "neuroculture." That is a red herring.
  • Nick Shinn Posts: 2,131
    edited October 2014
    But your experiments reduce reading to the act of deciphering text, which is something a machine can do, and you are an evangelist for neuroculture, promoting its use in typography. Or do you believe that reading is not a cultural activity?
  • Peter Enneson Posts: 31
    edited October 2014
    Do I not believe reading is a cultural activity? Do experiments that look at reaction times or blink rate actually, or in effect, reduce reading to the act of deciphering text?

    The socio-linguistic act of reading has a cultural dimension (because reading plays a formative role in shaping lives and facilitating or inhibiting civil society) and it has a cognitive dimension in that it involves meaning construction.

    It is foremost a socio-linguistic act because it centrally involves communication between people through the medium of language, and sense-following.

    The socio-linguistic act of reading has a sensorial dimension in that it involves the perceptual processing of punctuated strings of written words and sentences inscribed on parchment, printed on paper or made to appear on a screen. Actual deciphering only occurs if the identity of a word or the syntax of a phrase is unclear. Decoding is a misleading shorthand term for the complex feature-analytic processing and relational filtering that continuously occurs during visual word-recognition at the perceptual-processing front-end of reading.

    The socio-linguistic act of reading has a physical dimension in that it involves holding a book and turning pages, or sitting in front of a screen and scrolling with a mouse or keypad or a touch of the screen.

    The socio-linguistic act of reading also has a biological dimension in that muscular control, eye movements and neural function are involved.

    In the simple act of reading a vast multi-layered network of functioning is involved. Science looks at the functional components of single layers of functioning in this multi-layered network. Linguistic scientists look at one layer, perceptual scientists at another, cognitive scientists at a third.

    Typographic manipulation affects some layers more than others.

    When typographers look at the readability of a page or column of text, I believe their proper focus is on the physical shapes of the letters, stroke contrast, construction, physical spacing, lineation, measurable column widths, margins — the lot — with an eye to how they affect the perceptual processing component(s) of the total act of reading. Expert craft practitioners have powerful craft-native ways of gauging that what they put in place doesn't disrupt smooth processing. Sometimes atmospheric values (Ovink) and gestural force (Henk Krijger) are also part of this.

    Experiments that look at the feature-analytic processing and relational filtering at a neural level that go into the rapid, automatic visual word-form resolution that anchors sense-following don’t reduce reading to the act of deciphering text. Studies which try to determine and tabulate the perceptual-processing impacts of manipulating type-form micro-variables don't do this either.

    I think it would be good for typographers and type designers to know what happens at a neuro-physical level in the visual cortex and beyond when spacing is too wide or is irregular, or when stroke contrast across the alphabet is irregular. However, neuroscience doesn't have a good handle on this yet.
  • William Berkson Posts: 74
    edited October 2014
    I didn't use the word "neoculture" and don't even know what it is. After you used the word, I looked it up on wikipedia, and still don't understand what it is. I am certainly not an evangelist for it.

    Furthermore, I don't think that science can explain everything, especially not everything human. In fact, in my work as a philosopher of science I offered what I thought was a pretty original and good argument in favor of the limits of scientific explanation, both in my first book and in my recent one. And I do not think that reading only involves deciphering text. It is far more than that. However, I do think you can study how humans decipher text without studying everything about reading or about culture.

    Do you think that because science cannot explain everything—on which we agree—that it cannot or should not try to explain anything? If it were not possible to study one thing without studying everything, science wouldn't be possible in any field, whether psychology or physics. And when scientists study one thing, they aren't claiming to or attempting to reduce other things to it.

    For example, Luckiesh found by quite a few different measures—including complaints—that very small type and type under low light were more fatiguing to read. Why is this invalid if he didn't also study culture? Do you need to throw away your glasses that help you read because the optics scientists didn't study culture?

    Luckiesh wasn't trying to study everything involved in reading, but only Reading as a Visual Task, to use the title of his book. He was interested in what the eye and the visual processing did, not the whole of the brain, the whole of humanity or the whole of human culture. He thought that the limited objective was a worthy and a feasible one. I do too.
  • I didn't use the word "neoculture"
    Neither did Nick, btw.
  • Ray Larabie Posts: 1,376
    Is audibility vs. listenability the same as legibility vs. readability?

    A song can be measured in many ways, but can those measurements accurately calculate how awful Sting is? His songs are certainly audible. The song structure is sound. The production quality would probably pass the test. His pitch is probably bang on. But why do Sting's songs make me want to punch holes in drywall with my forehead while other people find them pleasant? Can we measure that sort of thing with anything besides a poll, or by counting holes in drywall?
  • In Bases for Effective Reading (Minneapolis, 1966, pp. 125–126) Miles A. Tinker notes that the ‘[…] subjective opinions of type designers and typographers as to legibility of letters prevailed throughout the nineteenth century and have carried much weight even up to the present day.’ According to Tinker, type designers actually could (and should) profit from legibility research: ‘Even though type designers, printers, and publishers have at times achieved fairly satisfactory legibility by their approach, application of the results of research would lead to marked improvement.’

    However, there is no proof in any form at all that legibility research took place prior to the cutting of punches in the early days of typography, nor is it likely. Neither Jenson, Griffo, nor any of their Renaissance colleagues seems to have investigated the physiological structure of the human visual system in relation to type. The fact that light falling on the retina excites photoreceptors was most probably completely unknown to the early punchcutters. Later in the Renaissance more became known about the real functioning of the eye.

    Aldus Manutius was a teacher before he started printing, not a scientist, and the types by Griffo that he used were standardized, systematized and unitized approximations of Humanistic scripts. This fact does not make the quote from Tinker above untrue, but Tinker and his colleagues were basically measuring harmonics, patterns, and dynamics that are the result of the production and refinement processes of type by the early punchcutters, who fixed the models.

    One could state that the legibility researchers seem to look for absolute values, a sort of Holy Grail of Legibility, and maybe there are only relative ones, which are the results of the applied systems and models in the graphemes that represent a certain script. If bold or sans-serifed versions of a typeface are considered less legible, then this is explainable from the point of view that the archetypal model was never developed with these derivatives in mind.

    Scientific research is generally empirical and often makes use of test groups. This seems to be in line with Tommy Thompson’s statement in How to Render Roman Letter Forms (New York, 1946): ‘The eye remains the only machine that can test the design of letters for it is the only organ that can read them.’ But if there really are objective criteria for measuring legibility, a machine, i.e., software, should in my opinion do the measurements. This would exclude unclear subjective preferences, like taste and the perception of beauty, or at least would make it possible to map these factors, described by Ovink as ‘atmosphere value’, against generic models. In my research I try to map the systems and models involved in roman and italic type. Kernagic and LeMo are ultimately meant to become type-analysis tools as well as design tools.
  • William Berkson Posts: 74
    edited October 2014
    Jan, Nick used the word "neuroculture" in his second post of Oct 29, above. He introduced it, not me. I never encountered it before and still don't understand what it means.
  • William Berkson Posts: 74
    edited October 2014
    Frank, it is conceivable that machines might one day be able to test a page for some aspects of readability, but that would only be after we have found out by testing human beings how the visual processing of human beings works in reading. All of the various meanings of "readability" share in common that they are about human reader experience. The question that the reading scientists have been trying to understand is how we humans read. You could only know what algorithm to use in a machine testing text after you've identified and confirmed that such an algorithm is at work in the human being.

    Tinker and Luckiesh had a huge quarrel over reading research methods and results, as we describe in our article. And we think that overall Tinker was a much poorer and less interesting researcher. Both Peter and I were astonished at how interesting Luckiesh is, given that most psychological research on reading is so lacking in insight—with Tinker being the prime example. However, Luckiesh and Tinker were in agreement in the sentiment you quote from Tinker. Luckiesh, we think, does actually have insights that typographers can use. However, he claimed too much for the implications of his own work for type design, which led to the end of his collaboration with Linotype.
  • Peter Enneson Posts: 31
    edited October 2014
    Bill, Nick used the term “neuroculture” not “neoculture.” That is the term you looked up.

    [ I see you corrected yourself after I first made this post ]
  • William Berkson Posts: 74
    edited October 2014
    Ray, I think that the analogy of audibility vs listenability doesn't quite work. The way you are using "listenability," it is mainly the quality of the music. Quality of writing is a factor in readability, but readability also has a visual decoding side that is different. In hearing music it is linear, one moment after another. But in visual decoding you are seeing and processing many letters and a few words simultaneously—and in the midst of a mass of other letters and words. It is this parallel visual processing and its efficiency that is in question in studying the visual aspect of readability, so the parallels are not exact.