Are we heading towards a "VariableFontCloud"?
Vasil Stanev
Posts: 775
I have thought about a peculiar possibility for some time:
- given that we now have variable font technology
- and that we can blend the outlines of two fonts to create a new font
- and that there is, for historical reasons, similarity in the shapes of many fonts
Is all of this a real possibility for future technology, or rather not? What do you think?
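At its core, the blending the opening post mentions is linear interpolation between point-compatible outlines: both glyphs must have the same number of points in the same order, and each coordinate is mixed by a weight. A minimal sketch in plain Python (the coordinates are toy values, not taken from any real font):

```python
# Minimal sketch: blending two structurally compatible glyph outlines
# by linear interpolation. Coordinates are toy values; in practice the
# outlines would come from real font data via a library such as fontTools.

def blend_outline(outline_a, outline_b, t):
    """Interpolate two outlines (same point count and order) at weight t."""
    if len(outline_a) != len(outline_b):
        raise ValueError("outlines must be point-compatible")
    return [
        ((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
        for (xa, ya), (xb, yb) in zip(outline_a, outline_b)
    ]

# Two toy "glyphs" with matching point structure:
serif_o = [(0, 0), (100, 0), (100, 140), (0, 140)]
sans_o  = [(10, 0), (90, 0), (90, 120), (10, 120)]

halfway = blend_outline(serif_o, sans_o, 0.5)
print(halfway)  # [(5.0, 0.0), (95.0, 0.0), (95.0, 130.0), (5.0, 130.0)]
```

The per-point arithmetic really is this simple; the hard part, as the rest of the thread points out, is making arbitrary outlines point-compatible in the first place.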
Comments
-
Could be. Custom mixing.
-
I am sure that the technology could be made to work, but what would be the point? Would you want a font made from this system for real use, or would it just be a humorous novelty?
-
Then, practising type designers will become high-concept style curators.
That is, if their social media identities are free of blemish.
-
It sounds like the perfect Frankenfont platform.
-
With enough money you could do it right now. I think you'd have to keep sans-serif, serif and script separate. Blending sans to serif always looks dopey.
-
See the Typo Labs presentation by Akiem Helmling and Bas Jacobs, starting around 17:00 to 20:40.
https://www.youtube.com/watch?v=eZiP_TIa96g
-
Ray Larabie said: With enough money you could do it right now. I think you'd have to keep sans-serif, serif and script separate. Blending sans to serif always looks dopey.
And what looks hip and what looks dopey is super subjective... but I think you are exactly right: it's dopey to me because it looks super dated.
I always think of Marshall McLuhan with this stuff.
I question the premise of Vasil's timing. This isn't particularly about the future, or font APIs, or OpenType v1.8....
I'd say phototype technology deterministically made super tightly spaced type so easy to do - past the point of good readability, but equally beyond what previous type technologies could do so easily. People ignorant of the dominant culture determined by previous type technology were not constrained by that thinking and did what they hadn't seen before, which looked dopey to the metal-minded people.
In the same way, I think the initial Adobe-Apple-Aldus-Altsys DTP technology deterministically made grunge type so easy to do, and the same thing happened, to the chagrin of the phototype-minded people. There were, I think, some interesting class dynamics going on too.
So what's interesting for me here now, is that this GX stuff is older than my sister, so the media determinism has already played out in Latin typeface culture: The people making text type in that time used early interpolation tech to publish "semi serif" styles within closely coupled sans/serif superfamilies, and the fashion appetite for semi serifs came and went.
By the mid 90s there were several tools like FontChameleon and InfiniFont for end users to frankenfont Helvetica to Bodoni to their heart's content.
So I don't expect the kind of "supervariable" fonts Vasil describes to be very culturally interesting.
Underware made one from the fonts in their collection and presented it at TypoLabs (and elsewhere) earlier this year; the video is on YouTube. Most of the latent space is garbage, not even dopey.
-
It’s been possible to morph photographs since the 1990s, and in recent years, thanks to advances in computer vision and artificial intelligence, the results of photo morphing are much more sensible — but you still don't see too many of them around.
These techniques are being used in some specialty applications like astronomy, medicine or geography, but not really in end-user-facing contexts.
For fonts, such techniques may be useful for font fallback or digitization of printed matter, but I don't think they'll become a mainstream application.
-
I think artificial faces are actually on the horizon of being used.
http://www.bbc.com/news/av/technology-41935721/why-these-faces-do-not-belong-to-real-people
-
Tons of good responses already in this thread. I agree with almost all of them.
We have had these capabilities repeatedly over the years: InfiniFont, FontChameleon, etcetera.
Early versions of such tech required that the fonts be manually adapted into the system. Still, you could make a FontChameleon "descriptor" for a single style of a font in about a day. Maybe a couple of days to a week depending on how precise you needed to be.
Once you had that, you could blend to your heart’s content.
But you know what? Designers/users did not go gaga over this technology. It just didn't change the world. Maybe that's because Adobe bought FontChameleon to use as a ROM font compression tech so it didn't have full opportunity in the marketplace. But I don't think that's the only reason. Unless you had a particular need, it was just a nice toy.
FontLab VI allows you to blend nearly arbitrary fonts, too, as long as they are structurally compatible. It does this without the amount of setup previously required, which is a neat feature and will come in super handy. But at the end of the day, that is just a tool. Better than the previous blend tool, but not a revolution in and of itself.
-
As I read this thread, it forced me to think more about the idea. It occurred to me that the whole system would have to be adjusted for conflicts in the font metrics, and that would be a task so mammoth as to be undoable. Someone would have to sit for years on end to tweak every detail, and still there would be some things impossible to get right. AND there would be real-world conflicts with the actual designers who did just that for their fonts and would not like the metrics of their works touched, or a conflict with smaller or bigger foundries. I see now this system is not reasonable in any aspect.
But I am glad I posted the thread; the answers are very insightful.
-
It'd be reasonable if it had a big enough budget. Let's say Omnicorp throws infinite cash at a universal font project. You start with a spec that takes into account alternate forms like binocular g, monocular a, straight t. The spec would have compatible metrics and a range of x-heights, ascenders, descenders. Kerning classes would be standardized, all that stuff accounted for by a think tank of leading type designers. Once the spec is finalized, a hundred type designers are hired to create off-brand Helvetica, Futura, Franklin, Times etc. with various weights, widths, x-heights and ascenders. If they stick to the spec, you end up with 500 compatible axes. Throw it in an expensive computer with software that can handle a 500-axis font. It's kind of silly but not impossible.
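A back-of-envelope check on the scale of such a design space (the numbers below are illustrative, not from any real spec): with one delta set per independent axis, which is the minimal sparse master setup for variation data, 500 axes need on the order of 500 masters, while placing a master at every extreme corner of the space grows exponentially:

```python
# Illustrative scale of a hypothetical 500-axis design space.
axes = 500

# Minimal sparse mastering: one default master plus one delta set per axis.
sparse_masters = axes + 1

# Full corner mastering: a master at every extreme combination of axes.
corner_masters = 2 ** axes

print(sparse_masters)            # 501
print(len(str(corner_masters)))  # 2**500 is a 151-digit number
```

So a 500-axis font is only even conceivable with sparse mastering and axes that behave independently; any attempt to tune combinations of axes pushes toward the exponential end.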
-
It is like putting all the different kinds of food in the world into a giant blender and turning it on. Let's see: tuna fish, apple pie, kimchi, a sprinkle of habanero, mmmmm ;-P
-
I'll also add that in order to interpolate between "anything" and "anything", each glyph needs a huge number of points, and huge numbers of deltas, which effectively does away with all kinds of size-reduction benefits associated with variable fonts. So not only are such omnifonts huge, but also most of the intermediate results are ridiculous.
I think the cost of creating and operating a system that could synthesize all songs in the world would be higher than just serving the plain recordings of all these songs that already exist.
Of course it is and will be possible to use clever outline compatibilization algorithms to serve interpolations between many standalone fonts. But such a server will most likely still just hold the existing fonts, and then compatibilization and interpolation would be attempted only upon request (and then perhaps cached).
-
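The serve-on-request idea described above can be sketched as a small cache layer. Here compatibilize() is a hypothetical stand-in for a real outline-matching algorithm (it just pads the shorter point list), and the font store is toy data:

```python
# Sketch of serving interpolations on demand and caching the result,
# rather than precomputing one giant omnifont. compatibilize() is a
# crude placeholder for real outline-matching algorithms.

cache = {}

def compatibilize(a, b):
    # Placeholder: real algorithms would re-segment outlines so both
    # glyphs share point counts and ordering. Here we just pad the
    # shorter point list by repeating its last point.
    n = max(len(a), len(b))
    pad = lambda pts: pts + [pts[-1]] * (n - len(pts))
    return pad(a), pad(b)

def interpolate(a, b, t):
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(a, b)]

def blend_on_request(name_a, name_b, t, store):
    key = (name_a, name_b, t)
    if key not in cache:  # compute only when first requested
        a, b = compatibilize(store[name_a], store[name_b])
        cache[key] = interpolate(a, b, t)
    return cache[key]

store = {"A": [(0, 0), (100, 0)], "B": [(0, 10), (50, 10), (100, 10)]}
print(blend_on_request("A", "B", 0.5, store))
# [(0.0, 5.0), (75.0, 5.0), (100.0, 5.0)]
```

Only the blends users actually ask for are ever computed, and repeat requests hit the cache, which matches the economics argued in the post above.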
Up at Dave Crossland: "Money's got nothing to do with it, just time."
Time and money have nothing to do with it.
And I don't think wrapping billions of vars in a file is the practical solution to most clouds, but I liked it.
A mentioned company already threw infinite cash at a mentioned universal font project, had the technology, tools, detailed parameters, fonts and font library, and still ended up in a cul-de-sac by tossing the parameters and obeying only the metadata needs of a legacy format or two.
It's much cheaper, faster and close to infinitely more practical today to expose systematically discovered parameters of existing fonts and systematically calculated parameters from variable fonts in one kind of data file, where the only difference is, in existing fonts the discovered values are of discrete instances, and in variable fonts the values calculated are samples from a range.
The format exists to do either the monster font or the gathering of values, up to a point, and then, (for either monstering or gathering), one must go beyond the legacy metadata for the needs of products and services from a parametrically related database of fonts and font families.
-
Chris Lozos said: It is like putting all the different kinds of food in the world into a giant blender
Adam Twardoch said: I'll also add that in order to interpolate between "anything" and "anything", each glyph needs a huge number of points, and huge numbers of deltas
-
Swamp water soda.