I'm working on a font that an agency wants to use in a campaign, and different agencies across Europe will be using it. It has to support some Central European languages as well as Western European users.
I'm currently using FontLab and have the Default Encoding set. When I need to add a glyph that doesn't already have a slot there, I switch to MacOS Central Europe, find it there and add it, before switching back to Default Encoding.
Is this the best way to do things? If I continue like this, will the font work for someone in Turkey or Poland, for example, in the normal way they would expect?
Any advice would be much appreciated.
For Turkish support, you need to handle the i/dotless i issue. The instruction below was given by Adam Twardoch some time ago and you can follow it:
For a better understanding of this issue, see Turkish alphabet and a specific discussion about the i/dotless i.
There are very good instructions on how to design the special characters used in Polish (also by Twardoch), Czech and Slovak, and Romanian. Also on Romanian, Bogdan Oancea posted a detailed analysis in another thread.
locl is an OpenType feature. Features are set in another window:
I suggest you take a look at this good article about OpenType features to get started in this realm.
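For illustration, the locl feature for the Turkish dotted/dotless i case is commonly written along these lines. This is only a sketch: the glyph name i.loclTRK (a localized dotted i, kept separate so it is not caught by fi ligatures and keeps its dot under case mapping) is an assumption for the example, not something specified in this thread.

```
lookup TRK_dotted_i {
    # Substitute the default i with a localized dotted i so ligature
    # and case-related features can treat it separately.
    sub i by i.loclTRK;
} TRK_dotted_i;

feature locl {
    script latn;
    # Turkish, Azeri and Crimean Tatar share the dotted/dotless i
    # distinction; exclude_dflt leaves the default language system alone.
    language TRK exclude_dflt;
    lookup TRK_dotted_i;
    language AZE exclude_dflt;
    lookup TRK_dotted_i;
    language CRT exclude_dflt;
    lookup TRK_dotted_i;
} locl;
```

In FontLab this kind of code goes in the OpenType features panel, the window mentioned above.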
But I do not understand what you call "auxiliary characters". Are they needed for old orthographies or loan words?
Don't get me wrong – I know it is quite difficult to compile consistent data from so many languages. I did that myself and faced conflicting information most of the time. But many of the characters listed as "auxiliary" on Alphabet's site are actually not needed or even used. Underware is adopting a similar classification to construct their Latin Plus character set, with several mistakes.
In Alphabet's tool, most auxiliary characters for Italian, French, Spanish, Quechua, Portuguese, Galician, and Catalan are not needed. For example: in Portuguese there are 54 characters listed but just ª and º are used.
I also noted that IJ is lacking for Dutch and ‹ « › » are lacking in Portuguese punctuation (quotes used in European Portuguese).
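Gaps like these are easy to catch with a small script. A minimal sketch in Python: the per-language character lists here are illustrative and deliberately incomplete (real data should come from a source like CLDR), and in practice the covered set would be read from the font's cmap, for example via fontTools.

```python
# Sketch: check whether a font covers the codepoints a language needs.
# The character lists below are illustrative samples, not full language data.
REQUIRED = {
    "Turkish": "çÇğĞıİöÖşŞüÜ",
    "Polish":  "ąĄćĆęĘłŁńŃóÓśŚźŹżŻ",
}

def missing_chars(covered_codepoints, required):
    """Return the required characters absent from the covered set."""
    return [ch for ch in required if ord(ch) not in covered_codepoints]

def coverage_report(covered_codepoints):
    """Map each language to its list of missing characters."""
    return {lang: missing_chars(covered_codepoints, chars)
            for lang, chars in REQUIRED.items()}
```

With fontTools, the covered set for a compiled font could be obtained as `set(TTFont(path).getBestCmap())` and passed to `coverage_report`.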
Anyway, the tool is amazing. Great job!
At the moment the database relies primarily on Unicode’s CLDR data, which is also where the auxiliary definitions come from. We are aware of some of the pitfalls connected to this source. Through feedback on questionable, wrong and missing information, as well as recommendations for additional sources, we are optimistic that we can improve it in the future.
Maybe a tool like this can even spark a discussion to get more reliable data into the CLDR.