Encoding / supporting multiple languages

Hi, 
I'm working on a font that an agency wants to use in a campaign, and different agencies across Europe will be using it. It has to support some Central European languages as well as Western European ones.

I'm currently using FontLab and have the Default Encoding set. When I need to add a glyph that doesn't already have a slot there, I switch to MacOS Central Europe, find it, add it, and then switch back to Default Encoding.

Is this the best way to do things? If I continue like this, will the font work for someone in Turkey or Poland, for example, in the way they would normally expect?

Any advice would be much appreciated. 

Thanks
Pascal

Comments

  • A better way is to find an encoding file (.enc) that supports the languages you need and put it in the folder App Support > FL5 > Encoding Files (or something like that). You can also make your own encoding file, of course, and use a mapping (.map?) file to set the Unicode values for each glyph; a rough sketch of an .enc file is below. All of this is covered in more detail in the FL manual, I suppose.
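    If memory serves (check one of the stock .enc files in that folder for the exact header syntax), an encoding file is just a plain-text list: a header line, then one glyph name per code slot, in order. Something like this, with a made-up name and number:

        %%FONTLAB ENCODING: 1001; My Pan-European Set
        A
        Aacute
        Abreve
        Aogonek
        Ccaron
        Eogonek
        Lslash
        Sacute
        Zdotaccent
        dotlessi

    FontLab shows the name from the header in the encoding drop-down once the file is in place.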
  • Igor Freiberger:
    What James said. As the Latin Pro encoding is probably larger than your actual needs, you can check each alphabet on the Evertype site or on Wikipedia and add just the necessary characters.

    For Turkish support, you need to handle the i/dotless i issue. The instructions below were given by Adam Twardoch some time ago, and you can follow them:
    Currently, the recommended way to deal with Turkish letters is to duplicate "i" as "i.TRK" and place the substitution into the "locl" feature:
    feature locl {
        # a script statement must precede the language statement
        script latn;
        language TRK;
        sub i by i.TRK;
    } locl;

    For a better understanding of this issue, see the Turkish alphabet article and a specific discussion about the i/dotless i.
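    Azeri and Crimean Tatar share the dotted/dotless i distinction, so if the brief ever grows beyond Turkish, the same substitution can be registered for their language tags as well. A sketch along the lines of Adam's snippet (the lookup name is my own):

        lookup DOTLESS_I {
            sub i by i.TRK;
        } DOTLESS_I;

        feature locl {
            script latn;
            language TRK;
            lookup DOTLESS_I;
            language AZE;
            lookup DOTLESS_I;
            language CRT;
            lookup DOTLESS_I;
        } locl;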

    There are very good instructions on how to design the special characters used in Polish (also by Twardoch), Czech and Slovak, and Romanian. Also on Romanian, Bogdan Oancea posted a detailed analysis in another thread. Romanian has its own locl case as well; see the sketch below.
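    Because legacy encodings only carried the cedilla forms, Romanian text often ends up with ş/ţ where the language actually wants the comma-accent forms ș/ț. A common fix is another locl substitution, sketched here; the glyph names for the t pair are historically messy in the Adobe Glyph List, so check what your own font uses:

        feature locl {
            script latn;
            language ROM;
            sub scedilla by scommaaccent;
            # t-cedilla to t-comma; naming varies between fonts
            sub uni0163 by uni021B;
        } locl;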

  • Thank you all, this is great! Much appreciated

  • One question re the dotless i: do I add the locl feature into the Classes panel, like this:

  • No. That window is for defining classes (groups of glyphs which will be used for kerning, metrics, or OpenType features). There, for example, you can make a group of glyphs with the same letterform on one side (e.g., B, D, E, F, H, I, L, M) to adjust the kerning for all of them at the same time, as in the sketch below.
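    In OpenType feature syntax the same idea looks roughly like this (the class contents and the kern value are just illustrative):

        # letters sharing the same straight left stem
        @LEFT_STEM = [B D E F H I K L M N P R];

        feature kern {
            # one rule adjusts the whole group at once
            pos quoteright @LEFT_STEM -40;
        } kern;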

    The locl is an OpenType feature. Features are set in another window:



    I suggest you take a look at this good article about OpenType features to get started in this realm.
    [screenshot of the OpenType panel: ot.jpg]
  • Benedikt Bramböck:
    To get started creating your own custom character set, Alphabet Type’s Character Set Builder might also be helpful to you.
  • Great, thanks a lot for that. The Character Set Builder looks really useful. I'm going to try that out. 
  • Excellent tool, Benedikt. Very well done and presented.

    But I do not understand what you call "auxiliary characters". Are they needed for old orthographies or loan words?

    Don't get me wrong – I know it is quite difficult to compile consistent data from so many languages. I did that myself and faced conflicting information most of the time. But many of the characters listed as "auxiliary" on Alphabet Type's site are actually not needed or even used. Underware adopted a similar classification to construct their Latin Plus character set, with several mistakes.

    In Alphabet Type's tool, most auxiliary characters for Italian, French, Spanish, Quechua, Portuguese, Galician, and Catalan are not needed. For example, in Portuguese there are 54 characters listed, but just ª and º are used.

    I also noted that IJ is missing for Dutch, and ‹ « › » are missing from Portuguese punctuation (the quotes used in European Portuguese).

    Anyway, the tool is amazing. Great job!
  • Swiss German also uses guillemets for quotation marks, FYI.
  • Thanks for the nice words and the feedback.

    At the moment the database relies primarily on Unicode's CLDR data, which is also where the auxiliary definitions come from (see the sketch below). We are aware of some of the pitfalls connected to this source. Through feedback on questionable, wrong, and missing information, as well as recommendations for additional sources, we are optimistic that it will improve in the future.
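    For anyone wondering where that distinction lives: CLDR keeps per-language exemplar sets in its XML locale files, roughly in this shape (abridged and simplified, not copied from any particular locale file):

        <characters>
            <exemplarCharacters>[a à â b c ç d e é è ê ë f g h i î ï j k l m n o ô p q r s t u ù û ü v w x y z]</exemplarCharacters>
            <exemplarCharacters type="auxiliary">[á å ä ã ā ē í ì ī ñ ó ò ö ø ú]</exemplarCharacters>
            <exemplarCharacters type="punctuation">[‐ – , ; \: ! ? . … « »]</exemplarCharacters>
        </characters>

    The tool's "auxiliary" bucket reflects that type="auxiliary" set.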

    Maybe a tool like this can even spark a discussion to get more reliable data into the CLDR.