Although I'm not one to push blind faith in scientific studies, this (from over two years ago, guys) should at least plant some fruitful doubt among the bouma-disbelievers:
https://gumc.georgetown.edu/news/After-Learning-New-Words-Brain-Sees-Them-as-Pictures
Comments
There are bouma disbelievers?
I think, on the contrary, some people actually overhype word shape without acknowledging the critiques of, and expansions to, the originally proposed ideas (which by now date back nearly 50 years).
There are indeed all kinds of people. In type design there are not nearly enough who take the bouma seriously, especially in the last few years, thanks to research I consider flawed.
Aoccdrnig to rsceearch at Cmarbidge Uinervtisy,
it deosn't mttaer in waht oredr the ltteers in a wrod aer,
the olny ipmoratnt tinhg is taht the frist and lsat ltteer
be in the rghit pclae. The rset can be a ttaol mses and
you can sitll raed it wouthit a porlbem. Tihs is bcuseae
the hmuan mnid deos not raed ervey ltteer by istelf,
but the wrod as a wohle.
It's funny that we can still read that mess.
And a bit puzzling too... how the hell do we do it?
Edit: Also, I think for me letters form a specific texture (I'm also a synesthete, so they have a color, but that's idiosyncratic). Each word's texture is not hurt by subtle switches in letter order.
Admins (“moderators”), please do your jobs and convert
/hideous-raw-urls-pasted-in-as-text/
into actual hyperlinks.
André
— At least in the fovea it's normal to parallel-process individual letters into words even if they're jumbled. Although this is not as quick as bouma reading, it's still in the immersive layer (so we don't have to consciously try).
— But also, a bouma being strongly defined by its silhouette, the first and last letters generally play a more significant role than the middle.
This might be because the latter's letter dislocations are too great (throwing off the parallel compilation), but it could instead/also be because the descender is moving too far, disrupting the bouma. Come to think of it, judicious dislocation of key glyphs (or even replacing certain ascending/descending letters with others) could be a key avenue for testing the [ir]relevance of boumas. Here it would be important to test the parafovea and not just the fovea, to mitigate the possible dominance of the parallel-letterwise layer.
Also note that especially in longer (typically compound) words there could be more than one bouma, or part of it could be a (more prominent) bouma; in "Cmarbidge" the [relative] intactness of "bridge" could be serving as a significant aid. My favorite example here is "readjust": because of their high frequencies and notable boumas, "read" and "just" pop out... and ruin the correct reading (which is why I prefer "re-adjust").
I posit that the ratio would drop (perhaps even reversing) with the following: leveraging of the parafovea, reading experience, and good typography. For example if you're flashing Avant Garde point blank in front of high-schoolers, boumas don't have a chance.
> And a bit puzzling too... how the hell do we do it?
If you aren't pretty good at spelling, much of it won't be intelligible. There are people who would struggle to finish it correctly, if ever.
It's that for some of us that trouble isn't worth the loss in prettiness. To be fair, it's a toss-up.
For example, this diagram from Rumelhart & McClelland (1982).
The real question is what it means that the brain might to some degree have a tendency to perceive words based on the images they form. Some type aims to disrupt; other type aims to drown you in the comfy daze of centuries of convention and establishment. Both is fine, though, is it not?
I've been saying that since at least 2005. In fact the relevance of individual letters is trivially obvious; the problem is the Larsonists who discount the bouma component entirely.
> Both is fine, though, is it not?
The more you know what you're breaking, the more fine it is.
For example, when something like Spectral gets released, you have to be able to grasp what's wrong with it in terms of spacing.
There's a lot of neo-Platonism these days, and this is one of those cases: we learn stuff by expanding correlations, and that's why the bouma model seems more likely to take the cake.
It's way more elegant to have a supercluster (a mental word image) that allows variation (within reason), instead of the massive brain effort to decompose and constantly verify a word letter-by-letter.
But what surprises me more is why we are in this age where empiricism is treated as invalid as a method.
To be fair though, I'm actually not sure what that last sentence means. Few people (and nobody in this thread AFAIK) consider empiricism pointless. But I for one do believe it's only half the picture. A favorite quote from Paul Klee: "Where intuition is combined with exact research it speeds up the process of research." I would go further and say that intuition can actually guide research (even though research might in fact end up countering it). Einstein instinctively felt he was right before he could formally prove it. The people who have doubted the parallel-letterwise model might not be Einsteins, but just because they don't have as much empirical backing doesn't make their intuition wrong.

Yes, it's very easy to fool oneself in the absence of empiricism... but it's not much harder to do the same by leaning too heavily on empiricism. And now we're in a position where some notable empirical research (in fact since 2009, apparently) supports the existence of boumas, or at least whole-word reading. So unless one side can formally disprove the other side's empirical findings, intuition becomes the "tie-breaker" in terms of what one believes.
I would say the necessity of taking both empiricism and intuition seriously parallels the necessity of taking both parallel-letterwise and bouma reading seriously.
With this said, I wasn't building towers to empiricism, nor to data-driven conclusions. Relying solely on either is an act of faith.
They should work together, because even if we have all the data and facts in the world, we still need to explain them. And if we have all the flawless reasoning possible, we still need verification.
Both methods are truncated, to some degree.
The idea that it is all interrelated - we see the individual letters, the bouma, the context of the words, the apparent sound values - makes perfect sense to me.
And with such a complex process, learning a new and different script means throwing most of it away until facility in the new script is acquired. So even if it would cure dyslexia, I don't think we will switch to an adaptation of the Korean writing system any time soon.
We have the empiricism of the marketplace: release a font and see if it becomes popular. Clearly, those which succeed are the most readable.
We can read it because the brain uses past experience to predict what's coming next.
It's not just the groupings of letters, it's the capitalization, the punctuation, the spaces, the need for an intelligible sentence to have a subject and a verb - the brain uses all of these and many other 'clues' as well to make meaning, come hell or high water. (If it can't make meaning, it will make stuff up.)
Brains use statistical probability to extract meaning, even if you are not consciously aware of it.
So, setting aside all the other clues your brain uses, think of it this way:
The average length of a word in English is 5 letters. If the first and last letters are all correct, that leaves, on average, only three letters that the brain has to unscramble to figure out the word the writer deliberately misspelled. Heck, even if you only speak Russian, you could get the meaning of that passage just by trial and error unscrambling using a Russian-English dictionary.
But if you do speak English, it's a piece of cake. The brain doesn't need to translate: it can unscramble the letters on the fly, guided by other probabilities provided by the rules of grammar and by the meaning of the words already unscrambled before the word currently under scrutiny. And that's only if your brain hasn't already guessed the word correctly without your having to unscramble any letters within it at all.
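As an illustrative aside (not from the thread): the scrambling rule described above, first and last letters fixed, interior letters shuffled, takes only a few lines of Python to reproduce. The function names are my own, and punctuation handling is deliberately naive. Note that with an average of three interior letters there are at most 3! = 6 possible orderings per word, which helps explain why the unscrambling feels so effortless.

```python
import random

def scramble_word(word, rng):
    """Shuffle a word's interior letters, keeping the first and last fixed.

    Words of three letters or fewer have no interior to shuffle and are
    returned unchanged.
    """
    if len(word) <= 3:
        return word
    middle = list(word[1:-1])
    rng.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def scramble_text(text, seed=0):
    """Apply scramble_word to each whitespace-separated token.

    A fixed seed makes the output reproducible; real stimuli generation
    would also need to strip punctuation before scrambling.
    """
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in text.split())
```

For example, `scramble_text("important research")` yields something like "iprmtoant rsereach": every word keeps its outer letters and its full letter inventory, only the interior order changes.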
It's also in keeping with the observation that the eye jumps around and sees many, if not most, words in a passing glance or with peripheral vision only.
If word shape wasn't helpful in some way, I fail to see how that could happen.
And it raises a lot of interesting questions. If a word is, indeed, remembered as a 'picture', what happens when you change the font? Different font, different picture, right? How does the brain handle that?
Don't know.
If word shape were critical, it would be hard to explain how people read all-caps text, and why with experience their reading speed on all-caps text approaches that of mixed-case text.
Of course, it helps if one defines "word shape" and it matters whether one operationalizes that in a way that is distinct from shape of individual letters....
Never critical, but "merely" helpful. And I would posit more helpful than the difference between a highly readable font and an average one. The bottom line is that if boumas are indeed read, we must design with them in mind, which is delicate work; in contrast, designing sufficiently-legible letters is child's play.
BTW letters aren't critical either. :-) Which is why we miss typos so often. It's notable that people have reported switching to a hard-to-read font when proofreading to catch mistakes; presumably this is because boumas are inhibited and letters become more central.
I don't believe this is true of properly conducted testing that leverages the parafovea.