AlphabetMagic. My first AI experiment
Comments
-
Exactly.. it's even worse... I share your worries, that's why I have been very conservative about it and haven't released it to the public as open source, as I usually do with everything else. Just imagine this in the hands of that big corporation that buys everything.. they will get it, sooner or later. As Ray said: It's fascinating, and it sucks.
die in fondue
I do have ethics, even if you don't like it.. Doyald got it better than you do, had no problem with it and helped. You are being more papist than the Pope! It would be nice if you used your real name for making personal attacks, but you won't, so I don't care, haters are gonna hate. But even if you hate me as much as you do, I think we are on the same page about the implications of AI. I guess you can appreciate that I'm not releasing AlphabetMagic into the open.
-
>> die in fondue
I love this nickname!
Regarding your “ethics”:
>> Option 2) Sell the code to Google, or Adobe, or MT, or whatever
Nice! How much would you charge for your integrity?
-
Hahaha I knew you would!
> Option 2) Sell the code to Google, or Adobe, or MT, or whatever
That option was just part of brainstorming crazy ideas out loud... you focused on option 2 only, but I also presented many other options, ranging all the way from making it open source so anyone can use it... to plain deleting it from my computer so no one can use it.
> how much would you charge for your integrity?
That's a very good question indeed.. I have been thinking a lot about it lately.. but also I don't think anybody will be interested in buying my integrity, what would they do with it? I guess they will be more interested in buying my know-how and expertise, whatever it may be.
-
Testing Page time!
Experiment 12 is up at the AlphabetMagic github page.
I made 4 new fonts for you to play with, each one inside its own folder.
For each one, I generated the PNG artwork using AlphabetMagic at 1024px.
Then I imported the auto-traced artwork "as is" into GlyphsApp and ran HT Letterspacer for quick automatic spacing. I wish I had the DTL autotracer for this experiment, but I don't.. so I had to settle for the standard Potrace.
No node dragging done to clean the imported artwork, no manual spacing, no kerning at all.
All I did was place the glyphs on the baseline quickly, without much care, so we get an uncontaminated test.
The glyph set available for all 4 fonts is abeghinopsvy
(in both upper and lower case).
You can play with it in the Tools section of the Testing page:
- In the "Made form" box, put: abeghinopsvy
- You can use the checkboxes "Some", "Initial", "AllCaps" and "Side by Side" to play with different mixtures of upper and lowercase
- In the "Size" box, you can choose whatever size you want, but if you input numbers bigger than 200 or smaller than 20 you also get cascades of different sizes. Try it out!
- You can use the browser "print" function and save as PDF for quick prints
-
Very cool!
I still think that eventually, you'd want AI to take care of spacing and kerning as well. If it can draw outlines, surely it can do that? [Edit: currently I think the spacing is lagging behind on the drawing quality a bit.]
What is your reason for the limited glyph set?
-
Jasper de Waard said:
I still think that eventually, you'd want AI to take care of spacing and kerning as well. If it can draw outlines, surely it can do that?
Spacing was not done by AlphabetMagic; it was done using HT Letterspacer... but I only did a quick run with pretty much the standard parameters. If you take the time to set it up properly, you will get much better results.
Jasper de Waard said:
[Edit: currently I think the spacing is lagging behind on the drawing quality a bit.]
Jasper de Waard said:
What is your reason for the limited glyph set?
-
These samples look pretty good to me, but I'm not qualified to judge these kinds of fonts. Despite the resolution and autotracing, I can see consistency in the serifs. The goals for the terminals feel unclear in all of these. Look at “agy” in isolation and there's little to connect them.
-
I think the problem with these text faces is that they not only look derivative, but they’re also boring and rather characterless. They look like what they are: cobbled together from images of other things, without benefit of internally guiding principles. And typeface design is all about internally guiding principles. So all these examples have the appearance of a first draft because there is only the application of an external idea—the prompt—, without the steps to gradually refine a design in reference to itself. And the external ideas are perhaps not very interesting ones, or the derivative nature of the machine learning process ends up making the result not very interesting.
Now, over the years, I’ve seen not very interesting initial ideas become interesting typefaces because of the way the ideas were explored—and sometimes abandoned along the way—, and I have also seen interesting initial ideas become bland typefaces by being overworked. Until AI can learn not only to cobble together references to other things, but also to understand and apply iterative processes of internally informed exploration and refinement, it seems unlikely to come up with interesting text faces. If it can come to work that way, though, it could be a useful tool to rapidly explore multiple directions a design could take.
-
Hi John, welcome!
PabloImpallari said:
Quick update on the "Testing Page" experiment:
Training the network for "top quality" contours, as I would have liked, would have taken 40 hours.. too much for an experiment. I reduced it to a more acceptable time of 4 hours. It was about to finish training (96%) when I got kicked out of Colab, since I'm on the free plan and they give preference to paid users.
1. While increasing the image resolution from 512px to 1024px, on the other hand, I decreased the training from 40 hours to only 4. Hence the results look shitty.
2. Another possibility may be that I added too much of the Fleischmann feeling to the prompt. As soon as I did that, the "internally guiding principles" went crazy (diagonal stress /e vs. vertical stress /o, etc. You get the idea).
3. A combination of both 1 and 2 at the same time.
John Hudson said:
Until AI can learn not only to cobble together references to other things, but also to understand and apply iterative processes of internally informed exploration and refinement, it seems unlikely to come up with interesting text faces. If it can come to work that way, though, it could be a useful tool to rapidly explore multiple directions a design could take.
AlphabetMagic is the perfect tool for that! "iterative processes of internally informed exploration and refinement" is exactly what sardines do! As we have seen from previous experiments: you can get as crazy, original and creative, or as boring, as you want.. ranging all the way from the Seashore Inspired Number 2 experiment, to the Monster Display and Agate experiment, to the "Make it Boring!" experiment with the Speedball font... to this (failed?) testing page experiment 12. AI in general is perfectly capable of coming up with new and interesting output, as Nick has noticed.
Since I wanted to try the auto-tracing, I focused so much on testing the "pixels per glyph" aspect of experiment 12 that I forgot about the importance of the alphabet shapes this time. Even more so when we come with high expectations from such an impressive experiment as the one from post 636.
For me it's a lesson learned: it all depends on the training!
I now consider experiment 12 "failed" and will try again, allowing more time for proper training. Maybe I can also decrease the image resolution a little bit, and later perform image upscaling for the auto-tracing stage. I will explore all that, even if I have to wait for next Sunday for free Colab time to be available again.
Also, I can perform another experiment with different training levels on the very same set of training images... so that AlphabetMagic can behave as an amateur, as an intermediate or as an expert type designer.
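In practice, those "training levels" are just checkpoints saved at different step counts while fine-tuning on the same images. A minimal sketch in PyTorch.. all the names here (model, dataloader, loss_fn, the step counts) are hypothetical placeholders, not AlphabetMagic's actual code:

```python
import torch

# Hypothetical "training levels": the same training images, with the
# network snapshotted at increasing step counts so it can later behave
# as an amateur, an intermediate or an expert type designer.
LEVELS = {500: "amateur", 5_000: "intermediate", 50_000: "expert"}

def train_with_levels(model, dataloader, loss_fn, lr=1e-5):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    step, last = 0, max(LEVELS)
    while step < last:
        for batch in dataloader:
            optimizer.zero_grad()
            loss = loss_fn(model, batch)  # placeholder training objective
            loss.backward()
            optimizer.step()
            step += 1
            if step in LEVELS:
                # Snapshot the current "skill level" of the network.
                torch.save(model.state_dict(), f"{LEVELS[step]}.pt")
            if step >= last:
                break
    return model
```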
-
A sturdy form that can endure lower resolution autotracing may be more suitable. In tough terrain, a square slab like Clarendon looks good—terminals are more visible, and sidebearings are less complicated. As a first assignment, I wouldn't expect a type design student, human or not, to create a premium book typeface. I get why you'd want to demonstrate its expertise by displaying a “correct” text font; however, there are typeface styles that would work better with the existing processing constraints.
-
PabloImpallari said:
As we have seen from previous experiments: you can get as crazy, original and creative, or as boring, as you want.
But I’ve not seen anything that I want to use, and I think you’ve missed my point, which is about the reflexive aspect of type design. The prompts used in AI generation are at once too specific (‘too much of the Fleischmann feeling’) and too reliant on generalisations elsewhere (e.g. from where does the AI get its notion of ‘boring’?); whereas the prompt I would give to a human designer would be ‘Now look at what you have made and figure out how to improve it’ and then ‘Now do it again’, iteratively and reflexively. Oh, and ‘Decide whether this is worth continuing with.’
PabloImpallari said:
Also, I can perform another experiment with different training levels on the very same set of training images... so that AlphabetMagic can behave as an amateur, as an intermediate or as an expert type designer.
Well, no, because it is fundamentally not doing what a type designer does.
-
I think what John is saying is what I was trying to say earlier. For this to really be useful, you want to be able to iterate over whatever the AI gives as output. Currently you can iterate through language (if I get it correctly) by saying 'more boring please', but since language is very imprecise (as Pablo pointed out earlier), it would be very useful if you could communicate with the AI visually, i.e. by making changes to the visual output. You could show it what you want, rather than tell it. So if you made a change to one letter, the AI would understand that change and apply it throughout the alphabet. Of course, this is just dumping ideas on Pablo without any idea of feasibility, sorry!
-
Well... yes and no... sort of.
For each alphabet generated you get the artwork, but you also get the "seed number", which represents the initial b&w noise used to generate the artwork... without getting too technical, think of it as the initial position of all the little sardines before they start swimming. Using that seed, you can also control the number of iterations the sardines do, or how long they keep swimming, until they arrive at their final position.
For example, you can set the iterations to 10 (super fast, 1 second) and create 100 concepts. Out of those 100 concepts, you can pick the one you like the most and trash the rest. That will be sort of similar to "Is this worth continuing?" for the 99 rejected concepts.
For the chosen winner, by using the seed number you can now recreate it exactly, and also increase the iterations over it from 10 to 100, 200, etc. (slower, 1 minute or more). That will be sort of similar to "Now look at what you have made and figure out how to improve it" and "Do it again". And now the little sardines will keep the iterative process going over and over, improving it on each iteration.
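Without getting too technical again: I won't say this is exactly how AlphabetMagic does it, but the seed-plus-iterations workflow above maps directly onto any Stable Diffusion-style pipeline. A sketch using Hugging Face diffusers, where the model name and the prompt are placeholder assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
prompt = "an alphabet of serif letterforms"  # placeholder prompt

# Cheap pass: 100 quick concepts, 10 iterations each, one seed apiece.
concepts = []
for seed in range(100):
    g = torch.Generator("cuda").manual_seed(seed)
    img = pipe(prompt, num_inference_steps=10, generator=g).images[0]
    concepts.append((seed, img))

# Pick a winner by eye; the same seed then reproduces the same starting
# noise (the sardines' initial positions), now refined with more steps.
winner = 42  # whichever seed you kept
g = torch.Generator("cuda").manual_seed(winner)
refined = pipe(prompt, num_inference_steps=100, generator=g).images[0]
refined.save("winner.png")
```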
At this point they are going to figure out how to improve it by themselves, which can be good or bad depending on what you want... but they are missing the feedback from the teacher or the master. We can improve the prompt, but that will produce an entirely new alphabet most of the time, and we don't really want that. If we want to keep the existing alphabet artwork and only refine certain aspects of it, there are currently a few methods for that. I won't get into details, but for a quick example you can see the typewriter alphabet in post 637. It's a visual modification over previously produced artwork. That will be sort of similar to a student processing visual feedback from the teacher.
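For those refinement methods, one common technique (just an example on my part, not necessarily the one used for the typewriter alphabet) is image-to-image generation: the previous output becomes the starting point, and a strength parameter controls how far the sardines may swim away from it. A sketch, again assuming a diffusers-style pipeline:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("winner.png").convert("RGB")  # previously produced artwork

# Low strength keeps the existing alphabet and nudges only the details;
# high strength would produce an entirely new alphabet, which we don't want.
refined = pipe(
    prompt="the same alphabet, with cleaner serifs",  # placeholder prompt
    image=init,
    strength=0.3,
    num_inference_steps=100,
).images[0]
refined.save("refined.png")
```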
As I explained before and Jasper noted, the weak typographic language corpus is a limitation, so we mostly have to rely on visual communication.
John, when you say "I’ve not seen anything that I want to use".. well.. we can't guess what you really want to see, or what you would really like to use, unless you get much more specific than that, be it by language or by visual examples.
There is some sort of general consensus about what an interesting alphabet looks like, but not entirely.. I guess every one of us here will have slightly different ideas of it... or completely different ideas too.
Let's say that somehow we got together, discussed our goal, worked together to train the network specifically for your preferences, and so on... You are very demanding, so I won't risk saying we can get to 100% satisfaction... but maybe I can risk saying you would approve the results, in pretty much the same way as if you had taught your very own student all the way from day 1 to graduation.
> Well, no, because it is fundamentally not doing what a type designer does.
We can guess, but we can't be sure until the experiment gets done. My initial guess is that, as you say, it is indeed fundamentally different, but I'm not sure what those differences will be, so it will be interesting for me to explore.
To summarize: you don't have to settle for the artwork as initially produced; you can keep iterating over it using a variety of methods.
-
John Hudson said:
e.g. from where does the AI get its notion of ‘boring’
Since we are the ones doing the training... the AI will get the notions from us!
If I train a network for you.. the AI will get the notions from you!
-
This raises an interesting possibility for AI art generators of all kinds: lots of the “learning” consists of users picking the best results from a choice of several, then iterating that process, ja?
Could subversive anti-AI users instead repeatedly pick the worst results, and thus make the AI “learn” the wrong behaviors, undermining its utility for everyone else?
It feels a little too simplistic, but I wonder nonetheless.
-
Could an iteration be reflexive in the sense of ‘I like the i and the n and the a: keep those as they are and improve the rest of the shapes to reflect those letters.’? That’s the kind of refinement based on internal principles that is what a lot of type design consists of.
The problem I see with what you describe is that the AI only ‘improves’ a design by being given more time to do more of what it has already done for that seed number. It can’t take the output of one seed number and make decisions about what is good or bad in that design and swap in different ideas for individual shapes.
I am reminded of a story that Lida Cardozo Kindersley told about when she first went to study stonecutting with her future husband, David Kindersley: she had previously trained in graphic design, so was used to producing multiple possible design options for clients, and started doing the same for inscription layouts. David advised her to only produce one layout, and then to refine it and refine it until it is as good as it can be.
The AI seems good at quickly producing mockups of lots of possible designs, but I wonder if it can do the reflexive refinement of a design?
_____
I also wonder if an AI could be taught concepts in letterform construction such that, e.g. TypeCooker suggestions could be taken as prompts?
-
Usually when I design a typeface, I try to think of a specific use, rather than what the design will look like. With that in mind, see if you can generate something based on an imagined client's criteria:
A signage font for a computer museum. The client wants to emphasize the MICR style (Data '70, Computer, Westminster). Lowercase is required, but there are places where tight vertical lists are needed, so the ascenders/descenders need to be tight. It's mostly going to be used by people walking or parking cars, so follow the principles of airport signage (Frutiger). It's in Germany, so there are lots of very long words… keep it compact but not compressed. Italics and multiple weights are not required. The client likes the unusual MICR thick-thin choices, so the less conventional those are, the better.
That's the kind of thing I think about before I start designing. That way, the typeface has purpose, even if it's fictitious. Whether it connects with an actual customer who requires it is a different story.
-
I'm not sure how much of this you are willing to share, but I think it's becoming obvious that we still don't really understand the process behind the fonts you have shown, Pablo.
Is there a text prompt? If so, what does it look like? Do you select X designs out of Y options? How often do you repeat this? Do you add to the text prompt as you go along? How much of it is 'learning' (potentially useful for future projects), and how much of it is just 'directing' (only useful for the project at hand). In other words: how do you get the AI to do what you want?
-
Looks like the source inputs to a ML model can be retrieved with access to the model...
-
Experiment 13 is up!
This is the Test Page experiment re-done, with proper time allowed for training.
- Artwork by AlphabetMagic
- Auto-trace by the standard Potrace and quickly imported into GlyphsApp
- Minimal node dragging (only to fix the /p and the uppercase /I)
- Autospacing by HT Letterspacer
- I did a quick round of manual kerning, because... well.. she deserved it
(lower 2 lower and caps 2 caps... no caps 2 lower at this time)
Testing page tools section:
- In the "Made form" box, put: aobdefhghimnopsvy
- You can use the checkboxes "Some", "Initial", "AllCaps" and "Side by Side" to play with different mixtures of upper and lowercase
- In the "Size" box, you can choose whatever size you want, but if you input numbers bigger than 200 or smaller than 20 you also get cascades of different sizes. Try it out!
- You can use the browser "print" function and save as PDF for quick prints
Files: Experiment 13 Github folder
-
Yes, of course it can be retrieved! That's the whole point of the training!
Like being able to unzip a previously zipped file.
-
Dave Crossland said:
Looks like the source inputs to a ML model can be retrieved with access to the model...
Do note that it can only retrieve so much of its training information.. which presents a problem where most, but not all, of the outputs are "original". Which means, to be safe from accidental copyright infringement, you'd ideally want to tweak the output generation quite a bit, but that'd imply more work.
-
Florian Pircher said:
I guess if you are asking the machine to make a new Garamond, it should not stray too far from what came before. But is it interesting to make the n+1th revival of an existing typeface?
What other alphabets do you see there, besides the -not used- Garamond?
If you, or anyone else, are able to guess the alphabets that I really used, I will be happy to confirm it!
Florian Pircher said:
Such an over-fitted use case feels more deprecating of others’ works than the prior examples, whether regarding commercial or libre fonts.
The more I keep doing all the experiments, the more I like the results from the initial post.
-
Florian Pircher said:
I used Garamond as an example since it has been revived more often than Frankenstein's monster by now. I could also have used Helvetica or other classics.
Having done my fair share of revivals (Libre Baskerville, Libre Caslon, Libre Franklin, Libre Bodoni, etc.) I can tell you first-hand what the reason is: people love the classics... they get used a lot, they sell a lot, they become instant hits! People like to use an alphabet that has that classic "name" attached to it... even if the design looks like something different. They don't know what the real classic alphabet looks like... they are not historians... they have just heard the classic names so many times that they simply assume they are selecting a good font when they face the font-selection nightmare of having to choose one among thousands... they just go with the brand... very much the same way you do when you are doing groceries and have to select a shampoo or something... the classics had years and years of publicity to their advantage. And they are tested designs... so they also work. It's a win-win situation.
Florian Pircher said:
#13 feels Dutch to me, if I had to guess.
The biggest influence is Italian, at 38.5%. The Dutch influence is only 15.3%.
Besides all this crazy alphabet math, we also get a "feeling", as you say, that is not mathematical. For me (personal opinion only) experiment 13 feels highly French, even if it is only 7.7% as far as the math is involved.
Can anyone guess the French alphabet involved?
-
#13 looks something like a Jenson to me.
-
Good eye, John! 7.7% to be precise.
-
Very interesting experiments, congrats!
Could you make an experiment that shows:
- How it can expand the character set of a given typeface,
- Create variations of weight and optical sizes,
- Draw glyphs for different alphabets (e.g. Cyrillic, Greek, Devanagari, Tamil)
Eager to see the results!