Hello,
as far as I can remember, I read in an old book from around the '70s that the printing press created more jobs for the people who did the illuminations, so it actually didn't put monk and lay scribes on the street - instead it created much more work for them. I use this as an example that AI actually creates more work for designers instead of destroying their profession. (I sure have had much more work since DALL-E 2 came along, from retouching images with 9 fingers on one hand to replacing creepy faces and whatnot.) But now I'm wondering if my information about the relations between printers and scribes is correct - it was an old book, it might not have been well researched.
Comments
Today, the demand for graphic design is already substantial, but design programs are graduating far more designers than there are jobs. In addition, crowdsourcing sites, inexpensive overseas competition, and do-it-yourself sites, such as Canva, have driven down wages in the middle and lower tiers of the profession, where the work doesn't require bespoke, integrated solutions.
I don't see how artificial intelligence (AI) will mitigate these challenges or create demand for more designers. AI might increase the need for illustrators to retouch what it can't yet do well, but that's a temporary need.
For example, I created the attached illustration this past week using Midjourney AI but needed to add the neck, shirt, and suspenders by hand because AI performed poorly with them. I still need to correct some mistakes it made with the eyes and tweak things here and there. Two or three years from now, I'm reasonably confident alterations like these won't be necessary.
Of course, graphic design is not synonymous with illustration, so how AI will affect graphic design is yet to be determined.
More to the point for this forum is how AI will affect type design. I can easily imagine using AI technology to create new typefaces. As with illustration, a type designer might still be needed to tweak and fine-tune, but given the fast pace of development in AI, this need might be only transitory for all but the most innovative or unique typefaces.
I can say with some certainty that AI will have an enormous effect on nearly all creative professions. Precisely what will emerge from this sea change seems much less certain.
To answer your question,
- I have recently had a lot of work from companies that thought they could let the marketing team create their design assets, and the results were... well... predictable.
- I've had people who tried to enhance their photos and ended up with creepy stuff: mostly hands with a gazillion fingers and faces out of the netherworld.
- I even had an offer, which I declined, from a company that had generated 5,000 images for NFTs but needed a ghost designer to go through the batch and fix all kinds of asymmetry and creepiness. So this is like a demon creating a devil, and the law hasn't caught up with it yet.
Art is something you pour your soul into. A machine has nothing to pour.
Even if it is possible to program an AI to generate design items, I don't think aesthetics are programmable.
1. There’s not enough data to process
2. It’s too imprecise
3. It can only do average
With some massive effort one might be able to generate some average font in order to tweak it, but in that case you might as well just take TNR or Helvetica and start there.
Or you can generate loads of weird display stuff, which will most likely get repetitive very quickly, so I don't see it as a threat to the whole industry either.
I predict there will be a short spike in AI-generated fonts producing things similar to the surfer fonts of the early '90s, and then things will return to the status quo.
In general, I regard AI as a technology that generates a lot of monstrous content that has to be worked over to look decent.
In regard to the underselling of services by the good people of the developing world, I would say that the theater is full, but the loge is still pretty empty. I have hired people from developing countries and, out of 40+, only 3-4 cut the mustard. There are many people who can make a logo or a business card for $2, but above that basic level supply decreases sharply.
I've always thought of supply and demand in design and other industries as two pyramids leaning toward each other and touching at their tips. At the bottom are the people who can hardly do anything and the ones who can hardly afford better services than theirs. As you go up, you eventually get to the biggest players: the biggest multinationals hiring the biggest agencies. There may be some skipping of levels, when a bigger client hires down or a designer lands an unusually big client (which is the same thing seen from the other side), but in general this is the structure, at least IMO.
The salient feature of artificial intelligence is its ability to learn from experience. However, today's AI programs focus on learning narrow tasks. An AI program written to learn chess can master the game in just a few hours and beat a grandmaster. However, that same program won't be able to play tic-tac-toe.
The popular text-to-image AI generators, such as DALL-E and Midjourney, learn from the enormous number of images and captions found on the web and the iterative learning that occurs as people use the software. These AI applications can't play chess or design type — they weren't programmed for that.
I'm surprised by the comments saying a shortcoming of AI is its difficulty with precision and detail — computer processors make millions of calculations per second without making a mistake. The current problem the text-to-image generators have with hands, eyes, and geometric objects has nothing to do with precision. It is, instead, primarily the result of programming limitations and not yet having learned what's required to generate acceptable images of those objects.
Artificial intelligence is still in its infancy. In the coming years, AI technology will improve enormously.
I don't know of any AI programs designed to create typefaces, but I don't see it as a problem beyond what's currently possible. There are fundamental rules regarding type: baselines, overshoots, x-heights, grid units, sidebearings, etc., that programmers could easily incorporate into AI algorithms. Tens of thousands of typefaces exist for the AI programs to analyze to determine acceptable parameters for individual glyph shapes, commonalities, divergences, color consistency, weight uniformity, etc. Of course, not every typeface is well-designed, so the AI might only consider subsets of specified fonts.
Assuming it's a word-to-image user interface, a type designer could tell the AI program to return a dozen iterations of a typeface. Each could contain specified Unicode positions that combine selected features from, for example, the most popular versions of Garamond and Bembo using the proportions of Gill Sans as a starting point. From those iterations, the human typography designer could choose one or two and request to see more iterations using slightly different input parameters.
This tweaking could continue indefinitely until the font designer was satisfied with the overall look. At that point, the AI would use everything it's learned from analyzing other typefaces to ensure that lines were straight, sidebearings were optimized, control points and tangent vector positions refined and harmonized, kerning pairs created, hinting added, etc.
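The iterate-and-select loop described above can be sketched in miniature. Everything here is hypothetical: a "typeface" is reduced to a dict of three design parameters, and `generate_iterations` and `designer_picks` are invented stand-ins for the AI generator and for the human choosing among its dozen candidates.

```python
import random

# Toy sketch of the iterate-and-select workflow. A "typeface" is reduced
# to a dict of design parameters; a real system would generate glyph
# outlines. All names and numbers here are hypothetical.

def generate_iterations(base, n=12, jitter=0.05, rng=None):
    """Return n candidate parameter sets randomly perturbed around `base`."""
    rng = rng or random.Random(0)
    return [{k: v * (1 + rng.uniform(-jitter, jitter)) for k, v in base.items()}
            for _ in range(n)]

def designer_picks(candidates, target):
    """Stand-in for the human choice: pick the candidate closest to the
    (unstated) design target the designer has in mind."""
    return min(candidates, key=lambda c: sum((c[k] - target[k]) ** 2 for k in target))

base = {"x_height": 0.50, "stroke_contrast": 0.10, "width": 1.00}    # starting proportions
target = {"x_height": 0.52, "stroke_contrast": 0.18, "width": 0.95}  # what the designer wants

for seed in range(5):  # each round: request a dozen iterations, keep the best
    candidates = generate_iterations(base, rng=random.Random(seed)) + [base]
    base = designer_picks(candidates, target)

print(base)
```

Each round nudges the parameters toward the designer's target without the software ever being told what that target is; the "AI" only proposes, the human only chooses.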
Even without AI, font creation software can do reasonably good jobs with some of these processes. Combined with AI, I suspect most of the little adjustments and tweaks to compensate for optical illusions and weight balances could be accomplished.
A key factor in this, however, would be the directions and judgment of the type designer directing how the AI proceeded during the development of the typeface, which brings me to creativity. I'm not sure that creativity is anything more than one's ability to combine various aspects of one's experiences to produce novel solutions to problems. In essence, that's also the fundamental principle behind artificial intelligence.
I'm not saying that AI font design software is right around the corner. What I am saying is that it's possible. If so, AI could revolutionize typography design. For example, Adobe might include font modification AI in InDesign. A graphic designer might use, say, Minion for body copy but tell the AI feature to make the type a bit softer or more friendly. The AI algorithms could then create multiple iterations of Minion on the fly based on what the AI software at Adobe had learned from other designers using those same terms. From that point, the designer could choose the version she wanted or make further modifications. As I said, it's not here yet, and InDesign might be obsolete long before then, but I think something like what I've described is doable.
Here’s my new AI fear: getting left behind. Type designers could benefit from AI tools, but we’ll probably be the last to get them because it’s a niche product. How many people in the world are working on font design software? A dozen? I don’t need AI to design my alphabets for me, but there are so many areas AI could help. Imagine making a plus sign and the AI completing all the mathematical symbols for all masters. Make a few sample glyphs for the heavies and ultralights, and the AI fills in the rest. Add some accented characters, and the AI creates all the anchors and non-spacing diacritical marks for all masters. Filling in the name tables and metrics correctly? Filling in the PANOSE table? Just kidding—even the most powerful AI couldn’t decode that mess. While graphic designers, typographers, and illustrators gain access to powerful time-saving tools, in the late 2020s we’ll be using the same font design applications we’re using today. By the mid-late 2030s, I doubt any of that will be relevant, as you’ll have software writing software and everything we know will change. But meanwhile, for type designers, I predict business as usual.
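The "make the extremes and fill in the rest" idea is, in its simplest form, what variable-font interpolation already does without any AI: an in-between weight is a pointwise blend of two compatible masters. A minimal sketch with invented coordinates (real outlines carry contours, curves, and point-compatibility constraints):

```python
# Deriving an in-between weight from two compatible masters, the way
# variable-font tooling does. Outlines are simplified to flat point
# lists; the coordinates below are invented for illustration.

light = [(40, 0), (60, 700), (80, 0)]   # hypothetical stem points, light master
heavy = [(20, 0), (60, 700), (100, 0)]  # the same points in the heavy master

def interpolate(light, heavy, t):
    """Return the outline at weight t, where t=0 is light and t=1 is heavy."""
    return [((1 - t) * lx + t * hx, (1 - t) * ly + t * hy)
            for (lx, ly), (hx, hy) in zip(light, heavy)]

regular = interpolate(light, heavy, 0.5)
print(regular)  # [(30.0, 0.0), (60.0, 700.0), (90.0, 0.0)]
```

The hard part, and where learning could actually help, is everything the blend can't express: drawing the extreme masters themselves, and the optical corrections that keep a mechanical midpoint from looking wrong.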
I can see AI doing a good job in the near future with lettering. Fonts are complex systems, and AI so far doesn’t have a grasp of characters working interchangeably next to other characters. But in lettering, characters only interact with their neighbors. My go-to example is the Aerosmith logo, which can’t exist as a font. AI already excels at that sort of free-form style, although it doesn’t know how to spell yet.
Over the coming years, AI technology will become gradually ubiquitous and used for everything from complex long-term weather predictions to coffee makers learning how their owners want their coffee brewed in the morning. At some point, I suspect AI will affect almost everything we do in unexpected ways, including type design.
Let’s not forget that the impressive AI we see from Google and Apple learns from an entire world of users doing things every day. Adobe might have access to a lot of design work to do something with. But take even driving, a skill most people can learn in a few months: it took top-notch engineers years to teach AI to drive decently. So what about type, which takes years to learn, and for which very few people have the skill to provide AI with data?
Now a jump to the year 1985 and the first affordable PCs. I remember creating the slides and handouts for my management courses in AI entirely on the PC, with no graphic designer or typesetter involved. In 1988 I authored a book with DTP; in 2003, one in four colours with InDesign and Photoshop. The ~100 photographs of the watercolour paintings I took myself with a digital Nikon.
Imagine how many professions and how much workload disappeared between 1970 and now. Scientific typesetting (tables, formulas) and graphics (drawings, charts) in particular are now done by the authors directly. It's similar for newspapers. For that matter, why print on paper at all? Over the last 20 years there was a massive restructuring from printed product catalogues to web shops. The Internet made it possible for manual image correction to be done in countries with low labour costs, like China. Of course, high-quality and creative artwork, like photographing fashion, people, cars, or food, is still a business.
Let's stay on image processing for a bit. Have you seen what the iPhone 12 can do? You can change the depth of field in portrait mode with a slider on the stored shot. To be honest, I can't understand (only guess) how this works. The usability enables laypeople, even children, to take high-quality photographs without technical knowledge. Compare this to programs like Photoshop, which are just a confusing collection of tools. Font editors rival Photoshop in complexity and inconvenience. Thus a lot of time is wasted on learning the tool, technical problems, etc., and not much time is left for creative design.
Back to AI. Over the last 10 years we have watched extreme progress in the application of deep learning: speech recognition, translation, optical recognition, music, life sciences. Mostly it's some sort of statistical modelling, a multifactorial regression analysis of high-dimensional data (up to millions of dimensions). This has some of the problems that human thinking also has: never-seen-before cases, popularity bias, garbage in garbage out, overly simple reasoning.
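That "multifactorial regression" core is easy to see in miniature: ordinary least squares fits statistical parameters to data, and deep learning scales the same fitting idea up to millions of dimensions. A toy one-dimensional fit, with data invented for illustration:

```python
# "Multifactorial regression" in miniature: ordinary least squares fitted
# by hand on toy data. Deep learning scales this basic idea of fitting
# statistical parameters up to millions of dimensions.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly y = 2x, with noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form slope and intercept for one-dimensional least squares.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # 1.99 0.05
```

The same fit is also where the listed failure modes come from: the model can only ever echo the statistics of the data it was fed.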
I highly agree with @Cory Maylett about text-to-image systems. They are very impressive. It's possible to specify "paint in the style of..."; some results you would buy, others look like the work of a Photoshop beginner (e.g. "Queen Elizabeth in a bikini" on a first-generation system). "Uncropping" is also impressive: for example, taking the famous painting "Girl with a Pearl Earring" by Jan Vermeer van Delft and creating a room around it. In music, at the leading edge of the science, Widmer et al. from JKU Linz, Austria, showed that machines can play music in the style of a certain pianist, like Arthur Rubinstein or Friedrich Gulda, and the audience takes it for the real thing. They can measure the differences between a personal and a mechanical interpretation. Even if such systems are not perfect, what's wrong with 99% or 99.99%, with human designers giving it the last personal touch? It's like telling a human assistant to improve this or that.
Type design is still an art and a handicraft, as it always was. That doesn't mean there is no AI research on generating glyphs in a certain style. I know some papers on, e.g., generating all Chinese glyphs from a few style samples (brush, pen, Gothic). The same goes for terminals and other details, which can be learned by AI. What about spacing and kerning? Not impossible. IMHO machines will be better than humans at it in a few years. I agree with @Ray Larabie that generating all missing glyphs across fonts will save a lot of time.
In my case I don't craft any glyphs by hand when reconstructing historic fonts. They should be as similar as possible to the originals. If the originals had no kerning, then having none is fine.
I predict that future generations will be displeased with old earthlings not making room for them by dying, so they will spread to other planets and other places for the sheer challenge of terraforming them. Earth will be a holiday destination, a quaint world that Martian and Oort humans visit the way we now visit the Acropolis in Athens to watch a replay of some old Hellenic theater play. Space humans will be amused by a population that hasn't altered its biology, the same way people with many tattoos and piercings would look at some grandma born in the 1920s.
So it's basically The Hyperion Cantos by Dan Simmons.
Midjourney v4 is good with type
I’m presently designing a typeface based on this 1966 calendar by Bob Gill, from his book Bob Gill’s Portfolio of 1968. AFAIK, there are no online scans of this page (until now).
I will change the big letter style and the inset words in my font(s).
(BTW, Gill’s idea was likely taken from 19th century banknotes, which I came across in a Typecon talk by Tobias Frere-Jones. And for all I know, there may already be a typeface “out there” which has this premise.)
One of the reasons this kind of type design interests me is that it is conceptual and outside the realm of facsimilizing, and now AI, which only does sequels and mash-ups.
It would perhaps be possible to create something similar with text “prompts” in graphics-generating AI apps, rather than humanual drawing in Fontlab or Glyphs, and in that sense, AI might be a useful production tool.
However, I would still prefer to draw by hand, with the unique organic-aesthetic “signature” that we all have, and anyway, how could AI come up with such an idea for a typeface in the first place? It would have to create a facsimile of me, my philosophy, taste, attitudes, practice and the immediate events of my life, and scan the entire contents of my library. Ima keep that private!
@James Puckett Thank you for sharing the post here.