Brain Sees Words As Pictures

Although I'm not one to push blind faith in scientific studies, this (from over two years ago, guys) should at least plant some fruitful doubt among the bouma-disbelievers:
https://gumc.georgetown.edu/news/After-Learning-New-Words-Brain-Sees-Them-as-Pictures

Comments

  • The actual article link.
These findings have interesting implications for reading remediation in individuals with phonologic processing impairments because they suggest the possibility that these individuals might benefit from visual word learning strategies to circumvent the phonologic difficulties and directly train holistic visual word representations in the VWFA [visual word form area].

    There are bouma disbelievers? :sweat_smile:

    I think, on the contrary, some people actually overhype word shape without acknowledging the critique and expansions to those original proposed ideas (which by now date nearly 50 years back).

  • Thanks for the link.

There are indeed all kinds of people. In type design, not nearly enough of them take boumas seriously, especially in the last few years, thanks to research I consider flawed.

QuoinDesign Posts: 4
    edited July 2017
    While I can read Cmarbidge, Cgiarbmde doesn't work. Shape recognition is an approximate process, so as long as the shape doesn't change too much, we're fine. Change it a lot and we're lost. 

    Edit: Also, I think for me letters form a specific texture (I'm also a synesthete, so they have a color, but that's idiosyncratic). Each word's texture is not hurt by subtle switches in letter order.
Craig Eliason Posts: 1,397
    It's funny that we can still read that mess.
    And a bit puzzling too... how the hell do we do it?
    There's some thoughtful analysis and bibliography if you scroll down on this page, put together by an actual "rsceearcher at Cmarbidge Uinervtisy."
joeclark Posts: 122

    Admins (“moderators”), please do your jobs and convert /hideous-raw-urls-pasted-in-as-text/ into actual hyperlinks.

  • Maybe, but I still value the transparency and convenience of visible URLs; ugly can be functionally beautiful. (While diversions are rarely so...)
  • I agree with Hrant here. I appreciate being able to immediately see that I'm being directed to georgetown.edu rather than (e.g.) crackpotterybarn.org.

    André
  • how the hell do we do it?
    I think it's a combination of two things:
— At least in the fovea it's normal to parallel-process individual letters into words even if they're jumbled. Although this is not as quick as bouma reading, it's still in the immersive layer (so we don't have to consciously try).
    — But also, a bouma being strongly defined by its silhouette, the first and last letters generally play a more significant role than the middle.

    While I can read Cmarbidge, Cgiarbmde doesn't work.
This might be because the latter's letter dislocations are too great (throwing off the parallel compilation), but it could instead/also be because the descender moves too far, disrupting the bouma. Come to think of it, judicious dislocation of key glyphs (or even replacing certain ascending/descending letters with others) could be a key avenue for testing the [ir]relevance of boumas; one way to generate such stimuli is sketched at the end of this comment. Here it would be important to test the parafovea and not just the fovea, to mitigate the possible dominance of the parallel-letterwise layer.

Also note that especially in longer (typically compound) words there could be more than one bouma, or part of a word could form a –more prominent– bouma of its own; in "Cmarbidge" the [relative] intactness of "bridge" could be serving as a significant aid. My favorite example here is "readjust": because of their high frequencies and notable boumas, "read" and "just" pop out... and ruin the correct reading (which is why I prefer "re-adjust").
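    For the record, here is a minimal sketch of how such test stimuli could be generated. It's purely illustrative: the ascender/descender classes, the word list, and the rejection-sampling scheme are my own assumptions, not from any published protocol.

    ```python
    # Toy stimulus generator for the test described above: scramble a word's
    # interior letters while either preserving or disrupting its silhouette
    # (the positions of ascenders and descenders). Illustrative only.
    import random

    ASCENDERS = set("bdfhklt")
    DESCENDERS = set("gjpqy")

    def shape_class(ch):
        if ch in ASCENDERS:
            return "asc"
        if ch in DESCENDERS:
            return "desc"
        return "x"  # plain x-height letter

    def scramble(word, preserve_shape, rng):
        """Permute interior letters; keep or break the per-position shape classes."""
        if len(word) < 4:
            return word
        original = list(word[1:-1])
        classes = [shape_class(c) for c in original]
        for _ in range(100):  # rejection-sample a qualifying permutation
            interior = original[:]
            if preserve_shape:
                for cls in ("asc", "desc", "x"):  # shuffle within each class only
                    idx = [i for i, c in enumerate(classes) if c == cls]
                    vals = [interior[i] for i in idx]
                    rng.shuffle(vals)
                    for i, v in zip(idx, vals):
                        interior[i] = v
            else:
                rng.shuffle(interior)
            same = [shape_class(c) for c in interior] == classes
            if interior != original and same == preserve_shape:
                return word[0] + "".join(interior) + word[-1]
        return word  # no qualifying permutation exists (e.g. one shape class)

    rng = random.Random(1)
    for w in ["cambridge", "reading", "typography"]:
        print(w, scramble(w, True, rng), scramble(w, False, rng))
    ```

    The code only produces the stimuli; the actual experiment would be presenting the shape-preserving versus shape-disrupting variants parafoveally as well as foveally.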
Thomas Phinney Posts: 2,730
    edited July 2017
    Studies have looked at the relative contribution of bottom-up word recognition from individual letters vs top-down word recognition, and found the ratio is something like 4:1, as I recall.
  • Interesting, I had not encountered actual numbers concerning that. Do you remember the source? And do you remember if it was concerning the fovea only or the entire field of vision?

I posit that the ratio would drop (perhaps even reversing) with the following: leveraging of the parafovea, reading experience, and good typography. For example, if you're flashing Avant Garde point-blank in front of high-schoolers, boumas don't have a chance.
Craig Eliason Posts: 1,397
    I agree with Hrant here. I appreciate being able to immediately see that I'm being directed to georgetown.edu rather than (e.g.) crackpotterybarn.org.

    André
    A hover and a glance at your browser's status bar will provide that information (more reliably in fact). 
  • It's funny that we can still read that mess.
    And a bit puzzling too... how the hell do we do it?

If you aren't pretty good at spelling, much of it won't be intelligible. Some people would be stuck trying to finish it correctly, if they ever could.

  • Craig Eliason said:

    A hover and a glance at your browser's status bar will provide that information (more reliably in fact). 

It's just that for some of us that trouble isn't worth the loss in prettiness. To be fair, it's a toss-up.
Personally I was amazed to find that, for example, neural network theory was first applied to the reading process some 35 years ago, and yet many practitioners of type design are still wondering whether it's feature recognition, letter recognition, word recognition, etc. Much more intuitive is the scientifically confirmed finding that they are all interrelated, and any good model factors in this interdependency. The Cmarbidge folklore just doesn't do it justice - it's the complexity of reading reverse-engineered - not bad per se, but what does it really prove, or help, for that matter...

For example, this diagram from Rumelhart & McClelland (1982); a toy sketch of the idea follows at the end of this comment.

The real question is what it means that the brain might, to some degree, tend to perceive words based on the images they form. Some type aims to disrupt, other type to drown you in the comfy daze of centuries of convention and establishment. Both are fine, though, are they not?
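    To make the "interrelated" point concrete, here is a toy sketch in the spirit of that interactive-activation idea: letter-level evidence and word-level units reinforcing each other in parallel. The lexicon, weights, and scoring are invented for illustration; nothing here reproduces the actual 1982 model.

    ```python
    # Toy interactive activation: bottom-up letter evidence plus top-down
    # feedback from the currently leading word. All parameters are made up.
    from collections import Counter

    LEXICON = ["cambridge", "cartridge", "knowledge", "university"]

    def recognize(stimulus, lexicon=LEXICON, rounds=3):
        act = {w: 0.0 for w in lexicon}
        best, gain = None, 1.0
        for _ in range(rounds):
            for w in lexicon:
                # Bottom-up: position-free letter overlap (jumbling-tolerant)...
                letters = sum((Counter(stimulus) & Counter(w)).values())
                # ...plus extra weight on the outer letters.
                outer = (stimulus[0] == w[0]) + (stimulus[-1] == w[-1])
                act[w] = letters + 2 * outer
                if w == best:       # top-down feedback from the word level
                    act[w] *= gain
            best = max(act, key=act.get)
            gain *= 1.1             # feedback strengthens over rounds
        return sorted(act.items(), key=lambda kv: -kv[1])

    print(recognize("cmarbidge"))  # "cambridge" wins despite the jumbling
    ```

    Tellingly, a toy like this scores "cgiarbmde" exactly the same as "cmarbidge", because it has no letter-order or word-shape term - which is precisely the gap the bouma debate is about.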
  • > they are all interrelated

    I've been saying that since at least 2005. In fact the relevance of individual letters is trivially obvious; the problem is the Larsonists who discount the bouma component entirely.

Both are fine, though, are they not?

    The more you know what you're breaking, the more fine it is.

For example, when something like Spectral gets released, you have to be able to grasp what's wrong with it in terms of spacing.
  • My favorite example here is "readjust": because of their high frequencies and notable boumas, "read" and "just" pop out... and ruin the correct reading (which is why I prefer "re-adjust").
...And now I can't unsee it.
Nick Shinn Posts: 2,131
    Re. Crambideg: as Mark Twain once quipped, never trust a person who only knows one way to spell a word.
Personally I was amazed to find that, for example, neural network theory was first applied to the reading process some 35 years ago, and yet many practitioners of type design are still wondering whether it's feature recognition, letter recognition, word recognition, etc. Much more intuitive is the scientifically confirmed finding that they are all interrelated, and any good model factors in this interdependency. The Cmarbidge folklore just doesn't do it justice - it's the complexity of reading reverse-engineered - not bad per se, but what does it really prove, or help, for that matter...

For example, this diagram from Rumelhart & McClelland (1982).

The real question is what it means that the brain might, to some degree, tend to perceive words based on the images they form. Some type aims to disrupt, other type to drown you in the comfy daze of centuries of convention and establishment. Both are fine, though, are they not?
As Hrant suggested, it's pretty obvious that the process works in parallel - bouma and letter-decomposition at the same time.

There's a lot of neo-platonism these days, and this is one of those cases: we learn by expanding correlations, and that's why the bouma model seems more likely to take the cake.

It's way more elegant to have a supercluster (a mental word image) that allows variation (within reason) than to expend the massive brain effort of decomposing and constantly verifying a word letter by letter.

But what surprises me more is why we are in an age where empiricism is dismissed as a method.
  • You'll need to define which version of “empiricism” you mean before anybody can disagree.
    But you hit Disagree anyway?  :-)
To be fair though, I'm actually not sure what that last sentence means. Few people (and nobody in this thread AFAIK) consider empiricism pointless. But I for one do believe it's only half the picture. A favorite quote from Paul Klee: "Where intuition is combined with exact research it speeds up the process of research." I would go further and say that intuition can actually guide research (even though research might in fact end up countering it). Einstein instinctively felt he was right before he could formally prove it. The people who have doubted the parallel-letterwise model might not be Einsteins, but just because they don't have as much empirical backing doesn't make their intuition wrong.

Yes, it's very easy to fool oneself in the absence of empiricism... but it's not much harder to do the same by leaning too heavily on empiricism. And now we're in a position where some notable empirical research (in fact since 2009, apparently) supports the existence of boumas, or at least whole-word reading. So unless one side can formally disprove the other side's empirical findings, intuition becomes the "tie-breaker" in terms of what one believes.

    I would say the necessity of taking both empiricism and intuition seriously parallels the necessity of taking both parallel-letterwise and bouma reading seriously.
scannerlicker Posts: 18
    edited August 2017
    You'll need to define which version of “empiricism” you mean before anybody can disagree. But clearly personal and individual experience is rather limited compared to experimental method, and when one is talking about how the brain works, it is awfully easy to fool oneself and hard to do double-blind experiments on oneself. (Let alone have n much greater than 1.)


Fair enough: I meant that one shouldn't draw immediate conclusions from insufficient data, nor should one discard intuition when something feels off. On the other hand, reason alone doesn't do much.

With this said, I wasn't building towers to empiricism, nor to data-driven conclusions. Relying on either alone is an act of faith.

    They should work together, because even if we have all the data and facts in the world, we still need to explain them. And if we have all the flawless reasoning possible, we still need verification.

Both methods are incomplete, to some degree.
John Savard Posts: 1,088

    While I can read Cmarbidge, Cgiarbmde doesn't work.
This might be because the latter's letter dislocations are too great (throwing off the parallel compilation), but it could instead/also be because the descender moves too far, disrupting the bouma.
    For myself, the first thing I think of is that in Cgiarbmde, the "g" has become a hard g, while in Cmarbidge, the sounds stay the same, but are re-ordered.

    The idea that it is all interrelated - we see the individual letters, the bouma, the context of the words, the apparent sound values - makes perfect sense to me.

    And with such a complex process, learning a new and different script means throwing most of it away until facility in the new script is acquired. So even if it would cure dyslexia, I don't think we will switch to an adaptation of the Korean writing system any time soon.
Nick Shinn Posts: 2,131
But what surprises me more is why we are in an age where empiricism is dismissed as a method.

    We have the empiricism of the marketplace: release a font and see if it becomes popular. Clearly, those which succeed are the most readable.
Richard Fink Posts: 165
    edited August 2017

    Aoccdrnig to rsceearch at Cmarbidge Uinervtisy,
    it deosn't mttaer in waht oredr the ltteers in a wrod aer,
    the olny ipmoratnt tinhg is taht the frist and lsat ltteer
    be in the rghit pclae. The rset can be a ttaol mses and
    you can sitll raed it wouthit a porlbem. Tihs is bcuseae
    the hmuan mnid deos not raed ervey ltteer by istelf,
    but the wrod as a wohle.

    It's funny that we can still read that mess.
    And a bit puzzling too... how the hell do we do it?
    Your brain does autocorrection and the filling-in of missing content all the time.
    We can read it because the brain uses past experience to predict what's coming next.
    It's not just the groupings of letters, it's the capitalization, the punctuation, the spaces, the need for an intelligible sentence to have a subject and a verb - the brain uses all of these and many other 'clues' as well to make meaning, come hell or high water. (If it can't make meaning, it will make stuff up.)
    Brains use statistical probability to extract meaning, even if you are not consciously aware of it.
    So, setting aside all the other clues your brain uses, think of it this way:
The average length of a word in English is 5 letters. If the first and last letters are correct, that leaves, on average, only three letters that the brain has to unscramble to figure out the word the writer deliberately misspelled (see the sketch below). Heck, even if you only speak Russian, you could get the meaning of that passage just by trial-and-error unscrambling with a Russian-English dictionary.
But if you do speak English, it's a piece of cake. The brain doesn't need to translate; it unscrambles the letters on the fly, guided by the rules of grammar and by the meaning of the words already unscrambled before the one currently under scrutiny - and that's only if your brain hasn't already guessed the word correctly without your having to unscramble any letters at all.
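    The manipulation itself is trivial to reproduce. A minimal sketch, with the caveats that the sample sentence is mine and the punctuation handling is deliberately naive:

    ```python
    # Cambridge-style jumbling: keep each word's first and last letters,
    # shuffle the interior. Sample text and handling are illustrative.
    import math
    import random
    import re

    def jumble_word(word, rng):
        if len(word) < 4:
            return word
        inner = list(word[1:-1])
        rng.shuffle(inner)
        return word[0] + "".join(inner) + word[-1]

    def jumble_text(text, seed=0):
        rng = random.Random(seed)
        # Only alphabetic runs are shuffled; spaces and punctuation stay put.
        return re.sub(r"[A-Za-z]+", lambda m: jumble_word(m.group(), rng), text)

    print(jumble_text("According to research, the order of interior letters matters less."))

    # The arithmetic above: a 5-letter word leaves 3 free interior letters,
    # i.e. at most 3! = 6 orderings for the brain to sort through.
    print(math.factorial(3))  # 6
    ```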

  • Although I'm not one to push blind faith in scientific studies, this (from over two years ago, guys) should at least plant some fruitful doubt among the bouma-disbelievers:
    https://gumc.georgetown.edu/news/After-Learning-New-Words-Brain-Sees-Them-as-Pictures
    This was very interesting, thanks, Hrant.
It's also in keeping with the observation that the eye jumps around and sees many, if not most, words in a passing glance or with peripheral vision only.
    If word shape wasn't helpful in some way, I fail to see how that could happen.

    And it raises a lot of interesting questions. If a word is, indeed, remembered as a 'picture', what happens when you change the font? Different font, different picture, right? How does the brain handle that?
    Don't know. 
  • If word shape wasn't helpful in some way, I fail to see how that could happen.

    If word shape were critical, it would be hard to explain how people read all-caps text, and why with experience their reading speed on all-caps text approaches that of mixed-case text.

Of course, it helps if one defines "word shape", and it matters whether one operationalizes that in a way that is distinct from the shape of individual letters....
  • Richard Fink said:
    If a word is, indeed, remembered as a 'picture', what happens when you change the font?
Presumably the picture is fuzzy (figuratively, but perhaps even literally). A bouma would be consistent enough across –good enough– fonts to make things click (with a frequency proportional to the quality of the font). This is why I oppose things like the "f" in Fedra (Roman), which descends. And this is not just hypothetical.

    If word shape were critical, it would be hard to explain how people read all-caps text
    Never critical, but "merely" helpful. And I would posit more helpful than the difference between a highly readable font and an average one. The bottom line is that if boumas are indeed read, we must design with them in mind, which is delicate work; in contrast, designing sufficiently-legible letters is child's play.

    BTW letters aren't critical either. :-)  Which is why we miss typos so often. It's notable that people have reported switching to a hard-to-read font when proofreading to catch mistakes; presumably this is because boumas are inhibited and letters become more central. 

    with experience their reading speed on all-caps text approaches that of mixed-case text.
I don't believe this is true of properly conducted testing that leverages the parafovea.
  • What's wrong with the spacing of Spectral?