ChatGPT 4o has much better type design and typography output


Comments

  • Nick Shinn
    Nick Shinn Posts: 2,216
    AI is a blossoming environmental disaster that sucks up energy and minerals, its hardware (soon to include every mobile device and server on the planet) quickly becoming e-waste. They will say that AI will solve this problem!

    Well, just suppose the AIs suggest that the solution is for the tech companies to start making mobile devices that last 100 years and can be repaired and 100% recycled, and if they can’t do that, just cancel the whole project. Fat chance. 
  • Dave Crossland
    Dave Crossland Posts: 1,431
    https://x.com/minchoi/status/1802702170581532875

    The lettering in this video is the "post fonts" authoring I've been considering 
  • John Hudson
    John Hudson Posts: 3,227
    edited June 17
    In this model, the text—limited as it is—exists in the prompt. I wonder how the post-fonts text model looks, potentially, in terms of encoding, storage, transmission, etc.?

    Does the post-font rendering happen at authoring origin, or at display time? If the latter, is it expected to be the same as it appeared to the author, or will it be different every time it is rendered, maybe using different AI engines or reacting to expanding machine learning? Will post-font texts only gradually assume a final form, where ‘final’ means the point at which the differences between one iteration and the next are too small to be considered significant? i.e. will Bringhurst’s ‘solid form of language’ become a kind of slowly setting concrete composed of the diminishing returns of increasingly self-referential prediction?

    Presuming the text is still encoded—somewhere—as characters, where does the shaping information reside? Is it standardised across AI text renderers? Will those renderers know what to do with formatting control characters, or will such minutiae be expected to live at the prompt level?
  • Vasil Stanev
    Vasil Stanev Posts: 775
    edited June 18
    I read somewhere on the Internet that AI might actually get worse over time because it feeds off its own production, which means bad production is not weeded out in time. I remember that some months ago I deliberately prompted ChatGPT to recommend a book on a nonsensical topic - "Skydiving in ancient Byzantium". Not only did it invent a book title and author, it also made up a synopsis of the drivel. I am no expert on AI, but I think this prompt has since made the overall level of ChatGPT drop a bit.

    I used ChatGPT 3.5 to translate part of a book. It did very well for the first two or three pages, then it decided the book was under copyright and that it couldn't continue. After I prompted it to translate more of the book, it abruptly switched to a very bad auto-translation, worse than Google's. I thought about buying a ChatGPT 4 subscription, but various materials on the Web suggest it may actually be WORSE than the free 3.5 version.
  • Dave Crossland
    Dave Crossland Posts: 1,431
    edited June 18
    I expect that the post-fonts text model looks just like that video - pre-rendered.

    Need copy and paste to work with pre-A.I. legacy software? A.I. OCR will turn it back into plain text, or semi-rich text, for you in a jiffy. Need the text formatting adjusted to dark mode, an increased font size, a lexend-ified wider setting, etc.? A.I. will recreate the document with the changes you prompted. Not satisfied? Iterate your prompt to be more specific.

    The idea that the text is still encoded—somewhere—as characters, and all the other questions built on this assumption, are all "pre-A.I. thinking".

    You might find this rather revolting, but "it is difficult to get a man to understand something, when his salary depends on his not understanding it."

  • John Hudson
    John Hudson Posts: 3,227
    edited June 19
    I never said I found it revolting, or difficult to understand. As for my salary, it is dependent on understanding, so the questions I posed were genuine questions: just some of the ones that need to be considered in the context of a transition from one text technology to another. And transitions in text technologies are sort of my thing.

    In terms of concerns about AI, I am much more worried about its massive energy requirements than I am about its ability to make some letters.
  • John Hudson
    John Hudson Posts: 3,227
    ... Iterate your prompt to be more specific.

    The idea that the text is still encoded—somewhere—as characters, and all the other questions built on this assumption, are all "pre-A.I. thinking".
    Unless we’re presuming all prompts are voice input, there’s a need for text input into the system. There is also a need for text as content: e.g. when I prompt the AI to make me a personalised Valentine’s gift book of Persuasion in early 19th-century lady’s handwriting, the best source for that content is still stored character text, not a third-generation OCR. Both these things imply something like character encoding and storage independent of rendering. I don’t see that AI necessarily changes the content/display bifurcation of digital text: it will doubtless have various kinds of impact on both, but the useful distinction is still there.

  • Ray Larabie
    Ray Larabie Posts: 1,435
    edited June 19

    @John Hudson If you consider a typical document workflow I might use in 2024, it could help you understand how this process might unfold in 2026. Presently, we lack a method to convert text into a print-ready typographic layout. For this case, let’s imagine that today, I’m tasked with transforming some existing text and a few photos into an appealing web page using ChatGPT. Here’s what I’d do. 

    I start with a Word document containing rough text that’s somewhat disorganized and unprofessional. I upload the document and ask ChatGPT to refine the text, making it clearer and more professional without losing its nuance. I also specify the intended audience for the text. Once the text is improved, I copy and paste it back into Word, proofread it, and make some additional edits. Then, I upload the edited document along with the photos. 

    Next, I ask ChatGPT to generate an attractive, professional webpage with an old-fashioned “fine typography” theme featuring off-white paper and old-style serif fonts. The photos should be appropriately placed within the document. The first paragraph should have a drop cap initial. The margins should be generous, and the text should be almost black. The CSS should be in the header, not a separate file, and the photos should be in a root folder named “photos.” Appropriate bullet lists and headings should be used, and the font should be Libre Baskerville from Google Fonts. The text should include proper smart quotes, long dashes, and other typographic niceties. Ideally, the result will be a nicely designed web page. I would have to upload the HTML file to my web host, make a folder for the photos and upload them, but most of the work would be done. 

    Now, let's envision a future scenario where ChatGPT can generate attractive print-ready layouts. We would use a similar prompt, but in this case, only the “idea” of Libre Baskerville would be needed. The AI would infer from the prompt that an old-fashioned look and fine typography are desired. It already knows what Libre Baskerville (and every font) looks like, so it can render the text without needing the actual font files. 

    I’m not certain this is how it will work, but based on my experience using AI tools, this seems like the direction we're headed. Considering the advances in generative text, music, images, and video over the past few years, it feels plausible. It’s hard to imagine how incorporating fonts into this process wouldn’t complicate things further. Just as we use genres, specific artists, or particular works to generate content in other media, being able to accurately replicate text with a specific typeface seems like the next logical step. (A rough sketch of how the 2024 web-page step might be scripted follows below.)
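    As a minimal illustration of the 2024 workflow Ray describes, here is a sketch of how the web-page prompt might be scripted, assuming the OpenAI Python client and the gpt-4o model; the file names and the exact prompt wording are paraphrased assumptions, not a tested recipe.

    ```python
    # Sketch: turn edited copy into a typographically styled web page via an LLM.
    # Assumes the OpenAI Python client (`pip install openai`) and an OPENAI_API_KEY
    # in the environment; model name, file names and prompt wording are illustrative.
    from pathlib import Path

    from openai import OpenAI

    client = OpenAI()

    copy_text = Path("edited_copy.txt").read_text(encoding="utf-8")  # the proofread text

    prompt = f"""
    Generate a complete, professional web page with an old-fashioned "fine typography"
    theme: off-white background, near-black text, generous margins, Libre Baskerville
    from Google Fonts, a drop cap on the first paragraph, smart quotes and proper
    dashes. Put the CSS in the <head>, not in a separate file, and reference photos
    from a root folder named "photos". Use headings and bullet lists where appropriate.

    Text to lay out:
    {copy_text}
    """

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )

    # Save the returned markup; it would still need proofreading and uploading by hand.
    Path("index.html").write_text(response.choices[0].message.content, encoding="utf-8")
    ```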

  • John Hudson
    John Hudson Posts: 3,227
    edited June 19
    That’s pretty similar to what I envision, Ray. Text still exists as content, but the technology used to render the text changes. Or may change...

    It’s hard to imagine how incorporating fonts into this process wouldn’t complicate things further.
    Fonts are robust and efficient. I think AI generated text display is definitely going to be a thing, and it will improve, and it may even, eventually, be able to generate high quality styles of text display that are, if not original per se, less obviously derivative of particular typefaces or lettering already made by people. I don’t think all text is going to be displayed in this way, because AI is an incredibly inefficient way to go about doing most things.

    The computing power and electricity supply* needed for AI are ludicrous in terms of the kinds of consumer uses to which it is being put. As reported in a recent CBC radio piece, it takes ten times more power to perform an AI search query versus a simple search engine query, even though the results are likely to be the same or, indeed, may be worse because AI makes shit up if it can’t find an accurate answer.

    * Microsoft, one of the key players in AI development, is reportedly investing in small modular nuclear reactor (SMR) technologies.

    Given the economic and environmental costs of AI, it makes sense to me to apply it to e.g. large data computation, scientific and medical research, and other fields where the benefit-to-cost is clear. It doesn’t make sense to apply it to things that are more efficiently done using simpler technologies.
  • John Butler
    John Butler Posts: 297
    At the moment my test question for free AI toys is “please list every Nick Cave song containing the words jangle or jangling.” No AI wankatron gives a complete or correct answer.
  • Rod S
    Rod S Posts: 1
    It’s hard to imagine how incorporating fonts into this process wouldn’t complicate things further.
    We can generate sounds, images and videos in the appropriate delivery format, so why not fonts as well?

    If a model generates a font behind the scenes, then the generated content scales, prints, reflows on alternate screen sizes, looks good because it generated opsz, and displays efficiently and consistently even on "legacy" surfaces.

    As a user I don't have to care about the details, I just tell it how I want my content to look.
  • Does the Monotype contract still give them the right to create derivative works that they own?
    As far as I've heard, the foundry contract from last year allows Monotype to create derivative works and provides no exit strategy for fonts that are still being used by clients. If they were to train AI on the 150,000 fonts they have in their library and through foundries, one could argue that an AI-generated font is a derivative design and is covered by the current agreement. They would still need to pay royalties, but I'm not sure how that would be calculated. This is all hypothetical, obviously.

    Still, it would be super flimsy if they did this. More likely, they would train AI on Monotype's own IP, as their library is quite massive by now. To what end, that is the billion-dollar question. They have already taken steps to pursue IP infringement for indie foundries (which is a massive revenue stream). You might want to look at foundries' other revenue streams and wonder what else they will want to take over.
  • Ray Larabie
    Ray Larabie Posts: 1,435
    We can generate sounds, images, and videos in the appropriate delivery format, so why not fonts as well?

    I don't think the image generator would require this kind of approach, based on how other types of generators work. AI image and video generators don't need 3D modeling to create scenes, and AI music generators don't rely on MIDI or modeled instruments to produce songs.

    There's likely no real danger of AI tools training on fonts because it's probably unnecessary. Let's assume the tool has access to every scanned magazine page available online. It's straightforward to identify which magazines are praised for their typography through online discussions. Since the AI “understands” what good typography should look like, it can generate text that appears aesthetically pleasing. Most readers, except those of us in the industry, don't concern themselves with the specific typeface. The lettering will suit the style and look good. Current tools can already use existing documents as a style guide.

    Consider the analogy (again…sorry!) of a sign painter, calligrapher, or scribe. They have models for how letterforms should look and personal techniques, but they don't need to purchase fonts. 

    So, what value do we bring in this near-future scenario? Our expertise is invaluable. We can distinguish between typography that simply looks good and typography that is truly transcendent. We know how to take bold risks and create thought-provoking juxtapositions. If you want to see the results of a lack of knowledge, look at the typical images people generate with AI. Their idea banks are so depleted that they're copying each other's prompts, resulting in images that blend elements of photography, Pixar, and digital art from the 2010s. Despite having the entire breadth of art history at their fingertips, all they can think to create are blue-haired girls in clichéd cyberpunk scenes. It will be the same with layout prompts, where people request “Rolling Stone Magazine style” or “Wired Magazine style” without any grace or nuance.

    For example, I know the effect Premier Shaded has on the human mind. It has a psychic impact on certain generations that I can't put into words. But your average shmoe has no appreciation for that sort of arcane knowledge and will be incapable of creating sublime works, no matter how many prompts they crib.
  • James Puckett
    James Puckett Posts: 1,998
    edited June 20
    What would be easier for AI to replace: type designers or Dave Crossland? ;)


  • Vasil Stanev
    Vasil Stanev Posts: 775
    I am not afraid of AI font production at all, because the devil's in the details. AI might work in certain very basic circumstances, but once you get into intricacies, you would need a programmer to have been aware of the specific problem in advance and to have written code for it, which is impossible since there are no time machines. Image AI is very bad, for example, at national costumes. I was recently approached by a client who needed a photorealistic AI image of a Filipino family from the 5th century. The AI produced some generic Mayan-Aztec-Hollywood costume, and on top of that it burdened the costume with unnecessary junk that would get you stuck on jungle vegetation immediately if it were real. I prompted the AI to generate a photo of a Bulgarian old man from the 1800s. It gave me some generic Slav with a colorful bead necklace around his neck. No man here from that time period would even dream of wearing a necklace. It would be as if the US president went to work dressed as a Buddhist monk with a boot on his head like Vermin Supreme.
    So I don't fear this technology at all.
  • John Hudson
    John Hudson Posts: 3,227
    edited June 21
    Vasil, you have just described what does frighten me about AI: not that it will be so good that it takes my job, but that it will remain chronically inaccurate and sloppy, and people just won’t care.
  • Dave Crossland
    Dave Crossland Posts: 1,431
    Hahaha James, I just spent the last 15 years seeking out only libre clones of the Letraset catalog, I didn't green-light a single original design or anyone taking any creative risks, sure, sure. 😂

    John, I'm talking about the creative industries' demand for retail and custom typefaces, not operating system fonts. 100 years ago in NYC the streets were packed with horses, and today there are still some horses in Central Park if you want to ride in a carriage. You'll still have work to do on legacy font technology after the creative industries have found fonts to be obsolete for practical purposes. Don't worry about it.
  • Vasil Stanev
    Vasil Stanev Posts: 775
    edited June 21
    John Hudson, our two posts seem to me, perhaps in reverse order, like the two stages of clients using AI: first they think they can cut corners, then they hit a snag and have to come back.
    Time will tell.  :)


  • Dave Crossland
    Dave Crossland Posts: 1,431
    (a) As industrial society continues to Jevons' Paradox its way to ever greater energy consumption, it will continue to develop cheaper and more efficient electrical grid power. I'm pro-nuclear, so I am not aware of why the inexorable increase in consumption is unsustainable. Microsoft has plans to use SMRs; well, and?

    We might as well say, given the economic and environmental costs of the internal combustion engine, it makes sense to apply it to only large rail networks, and doesn’t make sense to apply it to things that are more efficiently done using simpler technologies. Get on a bicycle! 

    (b) I don't see the LLMs' problem with porkies as relevant to diffusion models generating letterforms.

    (c) Per McLuhan, "Plato regarded the advent of writing as pernicious. In the Phaedrus he tells us it would cause men to rely on their memories rather than their wits." Apparently cultural standards have been headed lower due to new media for some time...
  • Matthijs Herzberg
    Matthijs Herzberg Posts: 154
    edited June 23
    @Dave Crossland a) Global energy consumption keeps rising yearly, and so the increase in nuclear and renewables is not actually reducing fossil fuel consumption. Global coal consumption has still not fallen (and even domestically, things are going far too slowly for <2°C), despite the Paris Agreement's niceties. We do not need a useless technology adding to this burden. Perhaps Jevons' Paradox will have us catch up sometime in the next two decades, perhaps not, but we need to cut energy use yesterday—climate change doesn't wait around for economic hypotheses.
    b) Do you not think that cultural stereotypes can exist in typefaces? That AI would not mindlessly continue Eurocentric type design dogma? You of all people should know better.
    c) Yes, Plato didn't like writing, faster horses etc, therefore all technology should be accepted without further question.

  • Thomas Phinney
    Thomas Phinney Posts: 2,896
    edited June 23
    Perhaps Jevons' Paradox will have us catch up sometime in the next two decades, perhaps not, but we need to cut energy use yesterday—climate change doesn't wait around for economic hypotheses.
    Jevons’ Paradox says the opposite of what you seem to be thinking. It is an observation that when we increase the efficiency with which we use a resource, we may well, in some cases, use more of it.

    In our current situation, it is an argument that increasing energy efficiency is not, by itself, always or necessarily effective in reducing energy usage; the result depends on the elasticity of demand for that particular thing or form of energy, and what happens to the price of that energy. Increased efficiency without increased price may well increase usage.

    One case we understand well was the one Jevons had most in mind. He theorized about this when improved steam engines made for dramatically increased efficiency in the usage of coal, saying that it might well not reduce England’s coal usage, but rather increase it. He was right—the usage of steam engines went up by way more than the efficiency increase.

    On the other hand, our use of, say, lighting does not have as high a degree of cost elasticity. Fluorescent lighting is about 5x as efficient as incandescent, and LED lighting is about 10x as efficient as incandescent; but the switch to such more-efficient forms of lighting has not resulted in such a massive increase in lighting usage. I expect it is up a little, but not nearly that much.

    Then there are more complex cases. Increased fuel efficiency for cars, if not accompanied by a corresponding increase in gas prices, might increase usage. Prior to electrification, that has not proved to be the case when cars have had to make other usability tradeoffs in order to improve fuel efficiency. If cars were magically 10x as fuel-efficient, it seems pretty certain people would not suddenly drive 10x as much. Definitely not that elastic.

    Jevons’ Paradox is not an argument against conserving energy. And it is not even an argument that increasing efficiency of resource usage is bad or good! In the increased-usage situations, presumably that usage has benefits as well. (A toy sketch of the elasticity arithmetic follows below.)
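    As a toy illustration of that elasticity point, the sketch below assumes a constant-elasticity rebound model (a simplification introduced here, not Jevons’ own analysis): if efficiency improves by a factor e and demand for the underlying service has constant price elasticity ε, total resource use changes by roughly e^(ε-1), so inelastic demand (ε < 1) yields a net saving while highly elastic demand (ε > 1) increases total consumption. The elasticity values below are illustrative, not measured.

    ```python
    # Toy constant-elasticity rebound model (an illustrative assumption, not Jevons'
    # own maths): efficiency improves by `efficiency_gain`, the effective price of the
    # service falls by the same factor, and demand responds with constant elasticity.
    def resource_use_multiplier(efficiency_gain: float, elasticity: float) -> float:
        service_demand = efficiency_gain ** elasticity  # cheaper service -> more of it used
        return service_demand / efficiency_gain         # resource use = service / efficiency

    # Lighting-like case: fairly inelastic demand, 10x efficiency -> big net saving.
    print(resource_use_multiplier(10, 0.3))  # ~0.20, i.e. roughly an 80% drop in energy use

    # Jevons' coal case: highly elastic demand -> total consumption rises anyway.
    print(resource_use_multiplier(10, 1.5))  # ~3.16, i.e. more of the resource is consumed
    ```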
  • @Thomas Phinney Thanks for clarifying. My point does remain: we should aim to reduce, or at least not considerably increase, power consumption while we try to catch up to renewable sources (or nuclear for that matter, which I am also for, but the usage of which still does not justify waste).
    In the increased-usage situations, presumably that usage has benefits as well.  
    In the case of AI, these benefits are only to the tech companies which force it upon us, not to humanity as a whole.
  • Ray Larabie
    Ray Larabie Posts: 1,435
    I’ve been diving into the latest AI developments and came across multimodal language models like Meta's CM3leon. Unlike ChatGPT, which only deals with text, these models can process and generate both text and images. Imagine the possibilities—handling text, fonts, images, and layouts separately but in a parallel, cohesive way. While they don’t handle fonts yet, the tech is evolving fast, and it might be worth keeping an eye on.
  • Igor Freiberger
    Igor Freiberger Posts: 279
    edited June 24
    Just a small correction, Ray: ChatGPT deals with images too. It (ChatGPT 4o) just created this illustration after I asked for a book cover in the style of W.A. Dwiggins. I'm not saying it's a good or bad result, just pointing out that it can generate and also receive images. (The book does not exist, but who knows?)


  • Ray Larabie
    Ray Larabie Posts: 1,435
    edited June 24
    @Igor Freiberger That's not quite the same. When you request an image, ChatGPT generates a prompt for DALL·E and returns that image. That's still unimodal. Unimodal models focus on a single source of data, such as text or images. They process only one type of data. If you upload an image to ChatGPT and request a description, it has a module that returns a text description of the image for the model to process. Even if it's using modules for analyzing videos, or doing mathematics, it's all being returned to ChatGPT as text for interpretation. Multimodal models operate with multiple data sources, such as text, audio, or images. They can understand and generate content from diverse sources, combining text with visual elements, audio recordings, or videos…or maybe even fonts.

    Speaking of FontLab 8, I've been using ChatGPT to simplify some of the more complex parts of the FontLab documentation. I tried an experiment where I fed it a section of the FontLab 7 manual along with the changes for FontLab 8, and it generated an updated version. It kind of worked, but there were too many errors to be really useful. Overall, I find ChatGPT great for translating technical manuals into explanations that are easier for me to understand. (A rough sketch of that manual-merging prompt appears at the end of this post.)

    If you're experimenting with that sort of thing, I recommend trying Claude.ai. I'm finding ChatGPT 4o to be worse than the previous version in some ways. With the new memory feature each GPT tends to forget its initial instructions and gets lazy about checking its reference documents.
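    A minimal sketch of the manual-merging experiment described above, again assuming the OpenAI Python client and the gpt-4o model; the file names and prompt wording are hypothetical.

    ```python
    # Sketch: ask an LLM to update an old manual section against a list of changes.
    # File names, model name and prompt wording are illustrative assumptions.
    from pathlib import Path

    from openai import OpenAI

    client = OpenAI()

    old_section = Path("fontlab7_manual_section.md").read_text(encoding="utf-8")
    changes = Path("fontlab8_changes.md").read_text(encoding="utf-8")

    prompt = (
        "Here is a section of the FontLab 7 manual:\n\n"
        f"{old_section}\n\n"
        "And here is a list of changes introduced in FontLab 8:\n\n"
        f"{changes}\n\n"
        "Rewrite the section so it is accurate for FontLab 8. Keep the original "
        "structure and do not invent features that are not mentioned above."
    )

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )

    # As noted above, output of this kind still needs careful human checking.
    print(response.choices[0].message.content)
    ```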
  • Perhaps that’s because I’m old, and can afford to MO on the latest “tech”.
    Hm… No. My opinion is that it's just that you are intelligent. ;-)
  • John Savard
    John Savard Posts: 1,135

    In just a few years, AI will likely create impeccable layouts with flawless typography. The generated lettering will perfectly match the tone of the content, enhancing its meaning and impact.

    Throughout this evolution, the traditional concept of fonts will become obsolete. AI-driven typography won't rely on pre-designed fonts, marking a significant departure from the way we've approached type design for centuries.

    In other words: AI generated fonts are nothing to worry about because the timing will coincide with the end of the need for fonts.

    That is certainly a possibility. However, I would think that this would merely cause exactly the same problem for typeface designers as AI-generated typefaces that match professionally designed ones in quality.
    And it would definitely give people more reason to make use of AI when preparing documents, thus making the problem worse.