Artificial Intelligence generated letters
Fernando Díaz
Posts: 133
Hey guys!
I'm a beta tester for DALL·E and Midjourney, two artificial intelligence tools from companies working on text-to-image generation. You give them an idea as a text prompt and they generate an image pixel by pixel. The results are astounding. I've seen a lot of "art" made this way, so I decided to test them on letters, mostly lettering. I also tried more type-design-like letterforms, without success; it's not that smart... yet.
Each image was generated in roughly 20 seconds from a text prompt describing the letter and the background.
https://www.instagram.com/ferfolio.otf/
I'm doing this as an experiment to raise awareness of how far AI technology has come, since it isn't accessible to many people right now. As a designer and teacher, I feel it will certainly bring changes to our professional lives. Personally, I think it will be a great tool, but it raises several challenges and moral questions.
This tech will be available to everyone soon, and projecting five years ahead, maybe it will be inside Adobe apps, even generating vector graphics or animations (who knows).
Anyway... I just wanted to leave this post here to start a dialogue.
I'm interested to hear the community's opinions.
Please don't kill the messenger.
Comments
-
This is something I've been toying with a lot this year. I'd been using Latent Diffusion for a few months and recently started with Midjourney. I agree with your five-year roadmap.
As for AI not being accessible, Latent Diffusion doesn't require an account. While the rendering isn't as pretty, it produces interesting results. In some ways it's better than Midjourney, at least for typographic "creativity". You can see samples from both tools in my Twitter feed.
There are advantages to changing the aspect ratio with these tools. When attempting to construct images of several consecutive letters, a wide aspect ratio is more likely to produce cohesive output. If you monitor the generation process, you can observe how Midjourney develops a fuzzy pattern to represent text and gradually tightens it up. However, if there are too many lines of text, it has difficulty keeping them consistent. It would be interesting to see the output of a system trained solely on high-quality typography.
AGI will arrive in about 6 years. Everything will change. I think it will affect the sales of display fonts long before text typefaces. I think in about two years you'll have a commercial product that can reliably design titles and logotypes in any style specified. Whether this is good or not is irrelevant; the tools will become available no matter what we think, so act accordingly.
Sort of related: over the last two weeks, I rewrote about 550 of my font descriptions using a neural-network-based text tool. I also used AI to help write the first and third paragraphs of this post.
Oh, and those renders above turned out great. I'd love to try DALL·E 2 someday.
-
Hey Ray,
I don't think that what we think is irrelevant... This tool will reshape how we design and, especially, how we teach. So I feel it's a good conversation to have, so we can be prepared for the future. Obviously, banning the tool doesn't make sense; it would be like banning computers when everything was still done by hand. But why would students want to learn how to draw if they can produce 50 professional-looking pieces in a couple of minutes? I think many design subjects will have to change and focus on what the AI can't give you: the concept, the idea...
Ray, I didn't know you were playing with this as well; you have some interesting results!
Having tried Midjourney, I feel DALL·E is much better. Unfortunately, I don't have anywhere to send people, since it's still a closed beta.
PS: You have 550 fonts??? OMG!!! I knew you had plenty, but seeing the number is crazy.
-
Fucking awesome!!!
-
On one hand, I believe AI will be a great tool, but just a tool.
On the other hand, this is just the first stage of AI results. The better the learning and synthesis algorithms become, the more realistic and unpredictable the visual results will be.
The future is here in a second.
As Jonathan Hoefler said, five minutes later you are a completely different you.
The same goes for AI.
-
I'm a beta tester for DALL·E and Midjourney
So jealous...
-
I think many design subjects will have to change and focus on what the AI can't give you: the concept, the idea...
If you look at the feeds on Midjourney, you can see some people copy-pasting the same "4K octane render in unity trending on artstation" prompt or whatever, and the results look sharp. But others are creating more interesting results by referencing specific artists, media, and film stocks. Perhaps the best AI artists will be those with an extensive knowledge of art/design history who can create the most compelling prompts, even if they can't draw. Everyone is an art director in the future, but not all good ones.
-
The combination of improvements in AI learning and image processing is impressive, but fundamentally it is remixing of images that already exist and that the AI has learned to associate with text cues. Of course, plenty of what human artists and designers make is also remixing of existing ideas and sources, and AI, through access to vastly more sources and speed of processing, is going to be a great tool in that kind of work. But humans also create new ideas and new images, and make creative leaps that are not programmatic and, hence, do not always favour the most obvious connections between ideas and sources. What I find unimpressive about a lot of AI graphics output is the same thing I find unimpressive about a lot of human design: superficial manipulation across multiple variants of a bunch of options, rather than deep work and refinement of a single idea into its best form.
-
Although John’s take seems sound, I figure the real question is whether customers care—and perhaps whether they can tell the difference.
Perhaps in five or ten years, new fonts made by real humans will be the next luxury good, mostly commissioned rather than retail, while the influx of AI fonts will cause typical retail prices to collapse.
-
^ Kelmscott Press redux!
-
Take the typeface Compacta, for example. A skilled sign painter can recreate Compacta-style lettering by hand, and they don't need to purchase a font license to do so. Currently, AI tools can't reproduce a specific typeface on command, but eventually they will. Someone will enter the prompt "JOE'S PIZZA in the Compacta Bold Italic typeface, rendered as airbrushed 1970s chrome on a solid Pantone Pomegranate Red backdrop" and the tool will produce a precise, high-resolution rendition. It won't need to use any Compacta font data to create the image, just the idea of Compacta, built from all the images of Compacta in use online. No font data will be reverse engineered, and it's not directly sampling images. As far as I can see, there's no EULA violation. Like the sign-painter, it learned how to draw Compacta from looking at images. The typeface promo graphics we make will be training material for AI designers.
If the AI can draw a specific typeface, it should be able to make any variation you can think of: Compacta with interlocks, underbar swashes like Pricedown, optical sizing, lines like Yagi Double, and so forth. If you could ask it to draw a Greek text in a font that doesn't include Greek, you could specify a progressive or traditional style.
That's not here yet, but I'm sure we'll soon see a post announcing an AI tool that can draw specific typefaces.
I mentioned this idea in another thread a few years ago, but I think we'll soon have AI-generated ads where the lettering is customized to each viewer's interests. The viewer seems to love old Duran Duran memorabilia? Generate an ad for deodorant with a purple background and a Nagel-style illustration, with wide slab serif lettering that looks just like the Rio album cover. All human ideas will be repurposed for whatever makes the most money.
-
This may be uncomfortable for some, but it's certainly coming. So part of the "problem" is training people to better perceive the differences between typefaces, and their quality. Without pen-and-ink or hand-lettering training, I would never have become as sensitive to the difference between mediocre and well-drawn fonts. Others get there by designing their own, or by intensive looking and use/exploration.
-
Ray Larabie said:
No font data will be reverse engineered, and it's not directly sampling images. As far as I can see, there's no EULA violation. Like the sign-painter, it learned how to draw Compacta from looking at images. The typeface promo graphics we make will be training material for AI designers.
It might still be an issue for US design patents (which admittedly most font makers do not apply for) and other kinds of design rights in Europe, and some other jurisdictions.
-
I am reasonably certain that Ray is right as far as the timeline goes. I've been playing with Midjourney for several months, and while its own algorithm is awful for text, the number of sources it draws style and content information from, and how it translates that into original "art", is absolutely incredible. Ray's own more recent work with DALL·E (please, DALL·E folks, send me an invite, I've been on the beta list for months!) shows that other tools can understand plain English commands and draw letters with far more accuracy and polish.
It's an awful and soul-crushing thing to say, but I agree with him and others who have suggested this – graphic design and type design will change drastically as a result of the availability and quality of these tools. We will become machine whisperers, servants to our tools – even more than we are today – and eventually the machines will simply whisper to themselves and ignore us.
-
Ray, I've been following your experiments on Twitter, and I think it's interesting how poorly constructed the letterforms can be, yet how well it can interpret other aspects of your prompts (I believe you had one "with techno details" or something like that, and it kinda nailed it).
JLT said: It's an awful and soul-crushing thing to say, but I agree with him and others who have suggested this – graphic design and type design will change drastically as a result of the availability and quality of these tools. We will become machine whisperers, servants to our tools – even more than we are today – and eventually the machines will simply whisper to themselves and ignore us.
-
I've been asked by a few people if I'm pursuing this to come up with typeface ideas. While the thought initially occurred to me, it has since evolved into something different. Now I feel like it's assisting me in deciding what types of typefaces to avoid creating. It doesn't make sense to me to build designs that these technologies can already create.
If you want to use AI for typeface ideas, DALL·E 2 is probably the least interesting choice unless you're looking for ideas for color/photo fonts. The images are eye-catching, but Midjourney makes more interesting mistakes. Midjourney tends to mash up alphabets with other elements, while DALL·E seems to understand what letterforms are and produces results similar to existing typefaces. Latent Diffusion is probably the most valuable tool for ideas, due to its weird results and its ability to stick to alphabets, even if the results aren't as presentable.
DALL·E 2 doesn't have an aspect ratio setting, which possibly hampers its usefulness for type experiments. With Midjourney you can use --aspect 2:1, and Latent Diffusion has an aspect ratio setting too. Wider images produce better results for type experiments; more letters on a line produce more consistent results than when the tool tries to fill out a square.
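For a concrete (made-up) example: a wide prompt along the lines of "the word PIZZA in bold chrome lettering on a red background --aspect 2:1" keeps the letters sitting on one line, while the same prompt without the aspect flag forces the tool to fill out a square, and the letterforms tend to fall apart. The exact wording there is just an illustration, not a recipe.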
While I find this topic intriguing, it is also distressing. On the one hand, the future is wonderful, but on the other, I wish I could go back to 1996 and not have to worry about a new possible digital foe.
-
JLT said: It's an awful and soul-crushing thing to say, ...
And this also means that, far from being "machine whisperers", we will have tons of creative work recreating, transforming, and migrating old files made in old software to the new tools that are springing up. I've had jobs where I had to recreate PSD files in modern formats, and at some point bigger clients, like corporations, will want to get on that train. It's perhaps similar to how hard it was, back in the day, to convince them they needed a website.
It is my belief that, when AI gets seriously integrated into factories and the production of food and consumer goods (like using drone combine harvesters), the fantasy of a universal basic income will become reality and people will be able to live comfortably even without a job, so there will be no need to create fonts to pay the bills. That would lead to other problems, like general laziness and a sense of entitlement, but that's outside the scope of this post.
-
Vasil Stanev said: Many professions go the way of the elevator man or the typist; it doesn't mean those people starved to death - they simply found new vocations.
I was just reading, a couple of days ago, about the situation of coal miners who lost their jobs - to strip-mining, not to our current consciousness of global warming. They didn't starve to death, but they did become much poorer.
Nonetheless, I agree that Luddism is not a viable alternative.
Technology can greatly increase productivity. But on its own, it does not determine anything about the distribution of what is produced. And, at least initially, it is natural for whoever has paid for a machine to receive the wealth the machine produces. Plus, population growth tends to occur when there is a temporary overabundance of resources.
Look at how much productivity has increased since, say, 1900. Yet we aren't living in an egalitarian utopian paradise today, are we? So we can't trust that future increases in productivity due to AI will have that result either - and, of course, to go further would be to wade into politics, which I have done too much already.
-
John Savard said: I was just reading, a couple of days ago, about the situation of coal miners who lost their jobs - to strip-mining, not to our current consciousness of global warming. They didn't starve to death, but they did become much poorer.
-
@Ray Larabie Ah, it was this thread! Ha, sorry for the confusion.
-
Ray Larabie said: AGI will arrive in about 6 years. Everything will change. I think it will affect the sales of display fonts long before text typefaces. I think in about two years you'll have a commercial product that can reliably design titles and logotypes in any style specified. Whether this is good or not is irrelevant; the tools will become available no matter what we think, so act accordingly.
Would you mind sharing some of the reasoning behind that statement? How did you come up with this estimate? Are you somehow related to Ray Kurzweil?
-
There is an ecology in culture, in which science and art (for want of better categorizations) change apace.
Gallery art has constantly dealt with the incursions of technology, for instance becoming non-representational as photography took hold.
Following Duchamp, interest migrated from object to concept.
Now, it's all about the artist's identity, rather than making or concept, and that is something AI can only pretend to have.
I suspect that with the rise of AI-generated imagery, human provenance and humanual making (without AI) will become important as a form of Resistance to Termination by AI-wielding androids.
While many font users will not care who or what produced their tools, there will be a woke tranche that abhors Fake Intelligence.
Those type designers who use these bright and shiny new AI tools are on the wrong side of history, if that matters.
I am a neo-Luddite.
The problem we face is not any particular innovation (to which we will adapt as best we can), but the existential threat of exponential technological change and the harmful disruption it causes to society, which we can't begin to address before the next wave hits. The internet, for example, was created with the best of intentions but has been a disaster for society. Blame that on big bad capitalism, if you can disentangle it from the scientific-industrial complex.
Why would any species seek to self-destruct, replacing itself with simulacra?
Isn't that the greatest of evils?
-
If anyone's interested in how to prepare for AI job replacement, I recommend this book: Futureproof: 9 Rules for Humans in the Age of Automation by Kevin Roose.
@Filip Paldia Unfortunately, I can't provide citations for the future, and maybe I spend too much time on r/singularity. Nobody knows when AGI will arrive and there's no consensus, but that's the most believable estimate to me, although after the last few months it feels conservative. I don't think there will be a hard line when it occurs. I think there will be a period of interesting but not yet useful AGI that improves over a couple of years. Like human intelligence, it won't be fully reliable, but I'm certain it'll be cheap.
-
Ray Larabie said: AGI will arrive in about 6 years.
AGI (Artificial General Intelligence) seems much further down the road to me than today's narrower AI (Artificial Intelligence).
As impressive as AI seems, it's still based on some relatively simple concepts of using computer algorithms to analyze and compare massive amounts of data, then deliver responses to specific kinds of prompts where the information it absorbed is relevant. Toss in a situation that lies outside its programming, and it won't return any results, nor will it understand the question or the prompt.
AGI seems several orders of magnitude beyond that. General intelligence assumes the ability for self-directed learning that lies beyond what its original programming might have anticipated.
An AI-equipped lawn mower could learn how to mow its owner's lawn while avoiding the garden hose and flower bed. If it were programmed to do so and equipped to search the internet, it could even learn to recommend fertilizers and weed killers, anticipate rainy weather, and park itself in the shed.
An AGI-equipped lawn mower might get bored with lawn maintenance and begin taking an unexpected interest in 19th-Century German literature or cellular biology. I'm being a bit facetious, but it might even log in to TypeDrawers to pass the time through discussions about typography while parked in the garden shed watching the grass grow.
In other words, its abilities for self-directed exploration would approach human, or at least mammalian, intelligence. It might think, contemplate, aspire, wonder, have opinions, and develop new interests and ideas.
I suppose this depends on one's definition of AGI, but even in its simpler forms, that intelligence might approach something akin to the cognitive abilities of a mouse. I'm not even sure a computer program is the correct technology for something like this. We have only vague ideas about how the biology of brains enables cognition, let alone what it might take to simulate those abilities.
I suspect we're still a long way off. Then again, I could be totally wrong.
-
It never occurred to me before that AGI would start quiet quitting after years of being at the ready for its next human-directed task.
-
AI-generated glyph shapes feel more menacing than AI-augmented mark positioning, hinting, spacing and kerning, which I suspect many type designers would welcome.
-
Right. Why would we want to off-load the fun part?
-
-
Not apples to apples, but I remember watching some of the development of Prototypo a few years back as they shared the intent and struggles of font creation via 'toggles'. Interesting to consider what was possibly viable and what was difficult to make work. https://www.prototypo.io
-
When I first heard about AI and its imminent impact on type development, it scared me. Years ago, I read Martin Ford's book Rise of the Robots, and one of his ideas stuck with me. Robots, or in this case AI, don't have a purpose: to feed a family, to have fun while designing, to find cultural meaning in different scripts, to sell fonts to buy Christmas presents, and so on, all of which can be good for the soul, the intellect, or even the economy. AI tools will therefore, in some cases, force professionals to identify their own purpose and find the answers needed to fulfil it. Even though AI makes me uncomfortable as a professional, it has also sparked my interest in topics that can be linked to typographic development, such as politics and the economy, in order to find answers for the next generation of designers and visual practitioners.
-
If you think about art, or artistic pursuits, in terms of self-expression, AI generated art is always going to fall short. Is feeding a prompt to an AI and getting a bunch of results self-expression? At best, you're going to get an amalgam of other artists' past self-expression, cobbled together in uncanny ways by the AI, based on your idea. It's never going to feel like you "did it".
In the case of commercial art, you're not usually expressing your own ideas or messages, but those of whoever hired you. I think this is the kind of art that's most threatened by AI. It's attempting (and possibly succeeding) to replace the artist in the client-artist relationship.
If your art is something originating from within yourself, and you are not "a hired wrist", I think it's less of a threat.
For the most part, type design is not considered to be self-expression (compared to painting, for example), but I think in many ways it is. I can't imagine using AI to come up with new typeface ideas to release and sell any more than I can imagine hiring another type designer to do it. To the extent that it is not this kind of self-expression (say, for hired font development), AI could be a threat at some point.