The future of type


Comments

  • Maurice Meilleur Posts: 58
    edited August 2016
    An idea I've been toying around with for a while is that the future of writing is typography without fonts. A font—metal, wood, film, magnetic bits—is really just a static set of discrete instructions for reproducing letterforms, and until recently those instructions were not subject to influence from the context in which they were applied. (That's not the same as saying that applying them in every context yielded identical outcomes, which is obviously false.) Even with the digital fonts we have now, all context can do is call up one set of rules for which outlines to use, and how to use them, rather than another set. And even with multiple master fonts, a designer still has to create sets of specific instructions for drawing letterforms at the extrema of all the variation axes (like weight and width and contrast), and any instance of interpolation gets saved as a font. (Experiments like Beowolf showed it is possible to leave a random element in digital instructions, but the instructions themselves didn't change with context.)

    The more recent examples of tech that Vassil listed represent a fundamental change (prefigured in a way by Knuth's Metafont already in the 1970s): the information barrier between context and instructions can disappear, so that context can directly and dynamically change the instructions for the outlines without their ever being 'frozen' in a single state as a font. That context could be input from the controls of an in-browser app like Prototypo; it could be things like screen resolution, line length, line spacing, and other typographic variables; it could even be things like number of pageviews, updates and activity at the other end of a link, the text's proximity to images, the date of the last edit to a page, etc. The end of this road is type, and typography, without fonts.
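The context-to-instructions idea above can be sketched in a few lines of Python. Everything here — the parameter names, the constants, the playful "staleness" input — is invented purely for illustration, not a real API:

```python
# Hypothetical sketch of typography without fonts: the rendering context
# feeds directly into drawing parameters instead of selecting a frozen
# font. Every name, constant, and mapping here is invented.

def parameters_for_context(ppem, line_length_chars, days_since_edit):
    """Derive drawing parameters from the typographic context."""
    # Coarser rasters (fewer pixels per em) get a heavier stroke weight.
    weight = 400 + max(0, 20 - ppem) * 10
    # Long lines get a little extra tracking to stay readable.
    tracking = 0.02 if line_length_chars >= 60 else 0.0
    # A playful contextual input: pages not edited recently fade lighter.
    weight -= min(days_since_edit, 50)
    return {"weight": weight, "tracking": tracking}

print(parameters_for_context(ppem=12, line_length_chars=72, days_since_edit=10))
```

The point is not the specific mappings but the shape of the pipeline: no stage here ever serializes a font; the outlines would be drawn fresh from the returned parameters.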
  • Mark Simonson Posts: 1,652
    Letterror created a font in 2003 that changed its appearance based on weather conditions: http://www.nytimes.com/2003/07/24/technology/is-it-about-to-rain-check-the-typeface.html
  • Nick Shinn Posts: 2,131
    Certainly, dynamically responsive typography could move beyond “frozen” fonts, but the exercise of human discretion will still be involved, moving up a meta-level to design how the responses will occur.

    And frozen fonts will still exist, because if all letterforms do is react automatically to a set of parameters—rather than embody the taste of a draftsperson, informed by the idiosyncrasies of his or her lived life—they will be dead, soulless ciphers with no personality and little appeal.

    Isn’t this the issue that stumped multiple masters?
    Isn’t this why we use our discretion to pick a point on the spectrum of interpolation to specify a particular weight that looks right to us, and so on for every variable parameter in a typeface?

    Also, designs wear out and tastes change. Predicting what people will like to see based on what they have liked in the past doesn’t work. As George Lois (among others) opined, “You can’t test great creative.”
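Nick's "point on the spectrum of interpolation" can be made concrete with a minimal sketch. Assuming two compatible master outlines (same point count, same order), any instance is a pointwise blend; the coordinates below are invented for illustration:

```python
# Two "masters" as compatible point lists (same count, same order).
# Invented example: a vertical stem drawn as a rectangle in a light
# and a bold master, in font units.

light_stem = [(100, 0), (140, 0), (140, 700), (100, 700)]
bold_stem  = [(80, 0), (200, 0), (200, 700), (80, 700)]

def interpolate(light, bold, t):
    """Blend two master outlines; t=0.0 gives light, t=1.0 gives bold."""
    return [((1 - t) * xl + t * xb, (1 - t) * yl + t * yb)
            for (xl, yl), (xb, yb) in zip(light, bold)]

# "Picking a point on the spectrum": a medium instance at t=0.5.
print(interpolate(light_stem, bold_stem, 0.5))
```

The designer's discretion that Nick describes lives entirely in the choice of `t`: the math offers a continuum, and a human picks the instance that looks right.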
  • Maurice Meilleur Posts: 58
    edited August 2016
    Mark: that's another great Letterror example, but remember that they drew everything themselves in advance—if I understand correctly how Twin worked, they wrote a piece of software that took in the parameters and generated a font from the existing (predrawn, static) repertoire of letterforms.

    I like contemplating the idea of typography without fonts because it's very instructive to think about the implications of such a future for software and operating systems, publishing online and in print, intellectual property, design judgment (where would it intervene?), and so on. But there are far too many people and firms with far too much invested (technologically, organizationally, economically, operationally) in fonts as they are now for things to change dramatically anytime soon. And it may be that fonts have affordances or advantages that we can't really grasp until and unless someone really makes a go of trying things differently.

    In any case, all technological changes layer the new over, or mix it in with, the old, and what that balance winds up looking like is impossible to say in advance. Aesthetics and judgment and informed discretion won't dominate the process—when have they ever, really?—but maybe if people invested in them work hard and have some luck, it won't rest entirely on accounting and profit margins, either. 
  • Jack Jennings Posts: 151
    edited August 2016
    To Mark's original question, though: when do any of these ideas escape the "digital" realm? Does that mean literally considering how type functions in a world where quantum computing supersedes binary computing? Maybe there are no ramifications for the spirit of the thing, given—as others have suggested—the historical aspects tied to communication that can't have the chair pulled out from beneath them.

    In the much nearer term, I think we're overdue for incrementally rethinking type as existing within matrices. This feels like historical baggage that we should be able to move away from, if only to *begin* thinking about space and form in the abstract, and less about a particular physical object/embodiment. From a technological perspective, the shift from calligraphic to printed production was great for the industrial revolution, though it is easily seen as a step backwards, constricting the form of writing to a unit constrained by the limits of the technology. Why are we still thinking and working in this mode?
  • Nick Shinn Posts: 2,131
    OpenType has removed the one-glyph-per-character constraint, just as Unicode has established the distinction between characters and glyphs.
  • @Nick Shinn, I'm concretely thinking about drawing type as though still constrained by the metaphor of metal matrices, which is in alignment with your "bouncing" type example.
  • Nick Shinn Posts: 2,131
    I’ve been exploring the possibilities of alternates and contextuality within OpenType for a while now, and there’s a lot of neat stuff that can be done, but it is starting to feel a bit bogus—if all it does is clunkily mimic hand lettering or the vagaries of primitive media. And it’s a lot of dreary grunt work drawing and fitting all the alternates.

    Consider this lovely piece of hand lettering from 1958 (shortly before modernist typography took hold of record sleeve design). Now, I could conceivably create this kind of effect with alternates and pseudo-randomness (note the different HAWAIIs). But it would make so much more sense if it were done in a layout application, taking a basic font and adjusting the degree of variation of glyphs by slider (individually or in groups), while specifying/manipulating a curved baseline path, all the while with a kerning algorithm operating.

    At the moment, Illustrator’s “Optical” kerning takes no account of actual glyph proximity in a layout, assuming a straight baseline path. And contextuality reboots with every fresh line (except in some apps such as Apple’s Pages).

    In answer to Jack’s observation about metal matrices, there has to be a migration of typographic features from fonts to layout applications, to move towards the inherent fluidity of what we still refer to as written language and writing systems, and to better represent the organic quality of our existence.
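The pseudo-randomness trick Nick describes (the different HAWAIIs built from canned alternates) can be sketched as follows. A hash of each letter's position picks among pre-drawn variants, so the choice looks random but the same text renders identically every time; the glyph and variant names are placeholders:

```python
import hashlib

# A few pre-drawn alternates per letter (placeholder names).
ALTERNATES = {"A": ["A.alt0", "A.alt1", "A.alt2"],
              "I": ["I.alt0", "I.alt1"]}

def pick_glyphs(text):
    """Pick one pre-drawn alternate per letter, deterministically."""
    glyphs = []
    for i, ch in enumerate(text):
        variants = ALTERNATES.get(ch, [ch])
        # Hash the letter together with its position, so repeated letters
        # can come out different while the same text always renders the
        # same way (no true randomness involved).
        digest = hashlib.md5(f"{ch}:{i}".encode()).digest()
        glyphs.append(variants[digest[0] % len(variants)])
    return glyphs

print(pick_glyphs("HAWAII"))
```

This is exactly the kind of logic that could migrate from font-internal OpenType substitution rules into a layout application, where it could be combined with the baseline paths and variation sliders Nick imagines.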

  • edited August 2016
    It may be that digital is the final form of type. And that's kind of what I'm getting at. 
    The term you are looking for is computation. Computation means giving data to some kind of mechanism, along with precise definitions of what to do with it, outputting the transformed data, and repeating that process until, many layers of abstraction later, you get the result you want. All of the possibilities contemplated in this thread follow this process: graphical font editing (that is, manipulation of cubic polynomials in a purely analog fashion), parametric design, artificial intelligence, neural input, etc. It’s not a bad bet to expect the foreseeable future of type to involve some form of computation through and through.

    As to the endurance of digital, that will depend upon the pace of development and adoption of quantum computers by the IT industry. There’s debate whether quantum computers will be a whole different game or just a magnitude more capable than today’s digital computers, but behave fundamentally the same. So that covers a lot of possibilities from the development of technology we particle physics laymen can’t possibly imagine to adding degrees of sophistication to already developed methods of parametric design, automation and text rasterizing.

    Of course, if you are wondering about the impact on how we will produce and consume type, that says very little. As Ted Nelson puts it in part 0 of his Computers for Cynics series:
    It’s all very personal. Everyone has passionate ideas and ideals, and wants to write software to fulfill those ideas and ideals (…). 

    There are so many ideas to care about. And with ideas comes the politics of ideas. There are thousands of computer ideas, and so there are thousands of computer religions that these ideas people care very much about. Every faction wants you to think that they’re the wave of the future, and because there are no objective criteria — as in religion — there are thousands of sects and splinter groups. So, the ideas and methods to be used are determined how? By fighting, jockeying, internal politics and maneuvering in projects and in companies. Everyone’s trying to get leverage and creative control.

    That’s the dynamics that will determine what tools, fonts, markets and jobs will be there for us in the future.

  • Mark Simonson Posts: 1,652
    A (possibly) relevant development: A computer program that can replicate your handwriting. Her, here we come. I wonder how well this would work with type? Or, maybe I fear how well this would work with type.
  • Nick Shinn Posts: 2,131
    fontifier.com has been doing that, since at least 2004.
  • Vassil Kateliev Posts: 56
    edited August 2016
    The topic is really getting very, very interesting! Nevertheless, I feel the need to take things back to the fundamental basics! As far as we humans are concerned, at our current (and near-future) level of technology, there are only two possible ways of storing information: analogue and digital. If we define glyph shapes as information, we could say that writing, calligraphy, metal type (wood type, intaglio, etc.) and phototypesetting are all analogue methods of storing type. We could also clearly state that any way of representing typefaces and their fundamental elements (glyph shapes, technical aspects, etc.) as numbers (which may or may not require computation) defines them as digital. So type has evolved from its analogue origins to a new form, digital, and of course as long as it is represented by numbers, no matter their implementation (bits or bytes or qubits or whatever) nor the technology used to compute them (computers, quantum- and bio-computers...), it will be digital. Digital type on its own has many different shades, but YES, type will stay digital as long as we do not know any other new way of storing/representing it.
  • Mark Simonson Posts: 1,652
    Nick Shinn said:
    fontifier.com has been doing that, since at least 2004.

    Fontifier just does automatic scanning and spacing and produces a very basic font, one glyph per letter. This is something very different. It imitates an individual's handwriting, with all its idiosyncrasies, automatically. It's doing stuff that I don't think even OpenType is capable of, synthesizing stuff on the fly, not just by following prescribed rules or pseudorandomly combining canned shapes, but by analyzing patterns in the source.

    Anyway. Pretty interesting.
  • Cory Maylett Posts: 245
    edited August 2016
    Previous major advances in printing/typography were impossible to predict before the underlying principles and technologies that made them possible were discovered or invented. For example, head back several decades and the whole notion of digital anything had barely occurred to anyone, let alone how it might relate to type design.

    Going forward, there's really no way to predict unanticipated breakthroughs or their consequences. For now, it's reasonable to assume that advances in computing power and software will push things along in an evolutionary sort of way. What the next big world-changing breakthrough of paradigm-shifting importance might be is anyone's wild guess.
  • Chris Lozos Posts: 1,458
    It is harder to think outside the box if the box has not been invented yet.
  • Ben Blom Posts: 250
    I agree with Miles Newlyn. The technology for creating and using type design will continue to change. The technology used to create type design will continue to influence type design. The centuries-old tradition of type design will continue to influence later type design. Type design itself will continue.
  • ...
    The new thing is very often something dismissed by experts as unsuited or inadequate for the task, or at least inferior.
    ...
    You should give Fontark a try.

    As a fully controlled, fast type-design tool and technology, in the short run it may make type design more accessible. Not that everyone could design type, but good and talented type designers could create fonts much faster. This will make custom type design available on much shorter time frames and lower budgets: you could create a font for a certain purpose instead of choosing one, or instead of trying to create a font that covers a wide range of uses. This will boost type design creativity and expressionism. 

    Not long ago Ray demonstrated a very cool week-long font design with FontLab.
    I'm thinking of making a font a week with Fontark sometime soon, just to make this point.

    In the longer run a very flexible and variable single master font format could be developed.

    Hopefully, eventually, telepathy will make type unnecessary; it'll then serve mainly as art :)


  • Ray Larabie Posts: 1,376
    6 years? Time flies. I generated the following graphic using Latent Diffusion, a text-to-image model created by CompVis, trained on the LAION-400M dataset. I used various prompts, each containing the word "alphabet". Examples: spiky goth alphabet, 1990s postmodern alphabet, cyberpunk stencil alphabet, LCD cowboy alphabet. The results don't seem that exciting until you compare one year of progress from DALL-E to DALL-E 2.

    https://nicksaraev.com/dall-e-2-the-death-of-art/
    In the first column near the middle, I used "Typodermic alphabet" as a prompt. It generated "TPORABET" using something similar to my Breamcatcher typeface, but spaced it tighter than the original and merged the R into the A. "Machine" in the top row came out remarkably clean; it looked a bit like my Chinese Rocks but tidied up. Most of the results with circled letters had "typewriter" in the prompt, probably because their training set had more photos of actual typewriters than typewritten text.

    What do you think? Scary? Exciting? Boring? What do you think technology will be like in another 6 years?
  • John Savard Posts: 1,088
    In another thread, I suggested that one fearsome possibility is that type might have no future; readily available high-bandwidth storage might mean that audio and video are used for all extended content, and text is only used for menus to select content.
    I don't think that's too likely any time soon, and since I can read much more quickly than people can talk, I would view it as a bad thing even for practical reasons.
    That we might use a form of type that is different from today's digital type in the future is indeed possible, but I think it would just be a different digital type format. Some technology superior to digital is something... that I can't even begin to imagine. But a new digital type format could end up being very different from the formats we use today, so different as to almost be a fundamentally new type format.
    Although would anyone but technicians really care if a new digital type format did away with the Bezier curve?
  • The use of scripts, letters and type has actually not changed fundamentally since roughly 2000 B.C. (apart from the fact that the tools have changed, considerably). The only actual progress I could possibly dream of is some system mastering the complexity of musical and mathematical notation, one day. But I don’t see it coming soon.
  • Looking into the scientific papers, fonts could be generated in the near future. It already works, but as always it will take some years to reach the market. Choose a style and the software generates all the glyphs.

    But maybe the use of written text will diminish. People are becoming more and more illiterate. They speak into Siri or other speech recognisers and the receiver can choose the final language, in letters or audio.
  • Alex Visi Posts: 185
    Ray Larabie said:
    What do you think?
    The fact that AI can generate images of recognizable letters doesn’t really mean a thing. There are many ways to create images of letters, but only one way to make functional fonts. AI is good for generalizing, but fonts are the opposite – very exact outlines where 1 unit can make a difference.

    John Savard said:
    In another thread, I suggested that one fearsome possibility is that type might have no future; readily available high-bandwidth storage might mean that audio and video are used for all extended content, and text is only used for menus to select content.
    I doubt that’s possible even in the distant future. Audio and video are not really a replacement for written language; they’re different. Nothing stops us from switching to voice messages only, but we don’t. Nothing stops us from having audio/video websites and social media, but we don’t (remember Clubhouse?).
  • Alex Visi said:

    The fact that AI can generate images of recognizable letters doesn’t really mean a thing. There are many ways to create images of letters, but only one way to make functional fonts. AI is good for generalizing, but fonts are the opposite – very exact outlines where 1 unit can make a difference.
    AI is also good for disambiguation. It can learn the differences better than humans.

    If one unit makes a difference at an em size of 2000, something is wrong. In the wild, renderings at line heights of 10 pixels and below appear and are still quite legible. OCR needs 30–50 pixels for the lowest error rates. I wonder what "functional font" means. The main function of a font is readability and legibility.
  • Yes, that is roughly what “functional” means. And I really doubt AI can magically create a classy, legible font, mainly because it is a complex, intelligent process. Not even a specialist can formulate the needed criteria. If it were easy, we would already see examples.
  • Chris Lozos Posts: 1,458
    We always assume that readability/legibility is the only measure of reading. Hundreds of years ago we achieved this, yet we now have millions of fonts available. Clearly, typography is not just letter recognition. There are other factors that escape the measure of the number of regressions. Meaning, persuasion, and emotional response are human factors well beyond our current ability to measure with any degree of certainty. We may THINK we know something, but we don't yet. We once thought we knew the earth was flat.

  • Alex Visi Posts: 185
    edited April 2022
    AI is also good for disambiguation. It can learn the differences better than humans.
    How would you use that understanding of differences for making type?
    If one unit makes a difference at an em size of 2000, something is wrong. In the wild, renderings at line heights of 10 pixels and below appear and are still quite legible. OCR needs 30–50 pixels for the lowest error rates. I wonder what "functional font" means. The main function of a font is readability and legibility.
    Counting pixels might be useful for text faces, but large type and high-res screens are the new norm.
    If readability and legibility were the only criteria, we’d have stopped somewhere at Garamond. Type designers sell their sense of style, their taste.