Merge remote-tracking branch 'upstream/master'
Makefile
@@ -20,7 +20,8 @@ LDFLAGS = -lm `pkg-config --libs cairo`
 PNGQUANTDIR := third_party/pngquant
 PNGQUANT := $(PNGQUANTDIR)/pngquant
 PNGQUANTFLAGS = --speed 1 --skip-if-larger --quality 85-95 --force
-IMOPS = -size 136x128 canvas:none -compose copy -gravity center
+BODY_DIMENSIONS = 136x128
+IMOPS := -size $(BODY_DIMENSIONS) canvas:none -compose copy -gravity center
 
 # zopflipng is better (about 5-10%) but much slower. it will be used if
 # present. pass ZOPFLIPNG= as an arg to make to use optipng instead.
@@ -78,7 +78,7 @@
   <head>
     <!-- Most of this table will be recalculated by the compiler -->
     <tableVersion value="1.0"/>
-    <fontRevision value="1.39"/>
+    <fontRevision value="2.004"/>
     <checkSumAdjustment value="0x4d5a161a"/>
     <magicNumber value="0x5f0f3cf5"/>
     <flags value="00000000 00001011"/>
@@ -246,7 +246,7 @@
       Noto Color Emoji
     </namerecord>
     <namerecord nameID="5" platformID="3" platEncID="1" langID="0x409">
-      Version 1.39;GOOG;noto-emoji:20170518:009916646ea7
+      Version 2.004;GOOG;noto-emoji:20180102:8bd8a303c391
     </namerecord>
     <namerecord nameID="6" platformID="3" platEncID="1" langID="0x409">
       NotoColorEmoji
README.md
@@ -2,11 +2,10 @@
 # Noto Emoji
 
 Color and Black-and-White Noto emoji fonts, and tools for working with them.
 
-The color version must be built from source.
-
 ## Building NotoColorEmoji
 
-Building NotoColorEmoji requires a few files from nototools. Clone a copy from
+Building NotoColorEmoji currently requires a Python 2.x wide build. To build
+the emoji font you will require a few files from nototools. Clone a copy from
 https://github.com/googlei18n/nototools and either put it in your PYTHONPATH or
 use 'python setup.py develop' ('install' currently won't fully install all the
 data used by nototools). You will also need fontTools, get it from
@@ -20,12 +19,32 @@ font will be at the top level.
 ## Using NotoColorEmoji
 
 NotoColorEmoji uses the CBDT/CBLC color font format, which is supported by Android
-and Chrome/Chromium OS, but not MacOS. Windows supports it starting with Windows 10
-Anniversary Update. No Browser on MacOS supports it, but Edge (on latest Windows)
+and Chrome/Chromium OS, but not macOS. Windows supports it starting with Windows 10
+Anniversary Update. No Browser on macOS supports it, but Edge (on latest Windows)
 does. Chrome on Linux will support it with some fontconfig tweaking, see
 [issue #36](https://github.com/googlei18n/noto-emoji/issues/36). Currently we do
 not build other color font formats.
 
+## Color emoji assets
+
+The assets provided in the repo are all those used to build the NotoColorEmoji
+font. Note however that NotoColorEmoji often uses the same assets to represent
+different character sequences-- notably, most gender-neutral characters or
+sequences are represented using assets named after one of the gendered
+sequences. This means that some sequences appear to be missing. Definitions of
+the aliasing used appear in the emoji_aliases.txt file.
+
+Also note that the images in the font might differ from the original assets. In
+particular the flag images in the font are PNG images to which transforms have
+been applied to standardize the size and generate the wave and border shadow. We
+do not have SVG versions that reflect these transforms.
+
+## B/W emoji font
+
+The black-and-white emoji font is not under active development. Its repertoire of
+emoji is now several years old, and the design does not reflect the current color
+emoji design. Currently we have no plans to update this font.
+
 ## License
 
 Emoji fonts (under the fonts subdirectory) are under the
@@ -40,5 +59,6 @@ Please read [CONTRIBUTING](CONTRIBUTING.md) if you are thinking of contributing
 
 ## News
 
+* 2017-09-13: Emoji redesign released.
 * 2015-12-09: Unicode 7 and 8 emoji image data (.png format) added.
 * 2015-09-29: All Noto fonts now licensed under the SIL Open Font License.
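The aliasing the README describes maps many character sequences onto a single asset. As a rough sketch of the idea, a parser for an `alias;target` style line could look like the following; note the exact line format of emoji_aliases.txt is an assumption here for illustration, not a quote of the file:

```python
def parse_alias_line(line):
    # Parse one 'alias;target' line, where each side is an underscore-separated
    # run of hex codepoints (format assumed for illustration).
    line = line.split('#')[0].strip()  # tolerate trailing comments/blank lines
    if not line:
        return None
    als, trg = line.split(';')
    to_seq = lambda s: tuple(int(cp, 16) for cp in s.strip().split('_'))
    return to_seq(als), to_seq(trg)


print(parse_alias_line('262e;262e_fe0f'))
```

Each parsed pair says "the file named for the target sequence also serves the alias sequence", which is why some sequences appear to be missing from the assets.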
@@ -14,6 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+from __future__ import print_function
 import argparse
 import glob
 import os
@@ -21,17 +22,20 @@ from os import path
 import shutil
 import sys
 
+from nototools import unicode_data
+
 """Create aliases in target directory.
 
-The target files should not contain the emoji variation selector
-codepoint in their names."""
+In addition to links/copies named with aliased sequences, this can also
+create canonically named aliases/copies, if requested."""
 
 
 DATA_ROOT = path.dirname(path.abspath(__file__))
 
 def str_to_seq(seq_str):
   res = [int(s, 16) for s in seq_str.split('_')]
   if 0xfe0f in res:
-    print '0xfe0f in file name: %s' % seq_str
+    print('0xfe0f in file name: %s' % seq_str)
     res = [x for x in res if x != 0xfe0f]
   return tuple(res)
 
@@ -66,7 +70,7 @@ def read_emoji_aliases(filename):
       als_seq = tuple([int(x, 16) for x in als.split('_')])
       trg_seq = tuple([int(x, 16) for x in trg.split('_')])
     except:
-      print 'cannot process alias %s -> %s' % (als, trg)
+      print('cannot process alias %s -> %s' % (als, trg))
       continue
     result[als_seq] = trg_seq
   return result
@@ -74,15 +78,20 @@ def read_emoji_aliases(filename):
 
 def add_aliases(
     srcdir, dstdir, aliasfile, prefix, ext, replace=False, copy=False,
-    dry_run=False):
+    canonical_names=False, dry_run=False):
   """Use aliasfile to create aliases of files in srcdir matching prefix/ext in
   dstdir. If dstdir is null, use srcdir as dstdir. If replace is false
   and a file already exists in dstdir, report and do nothing. If copy is false
-  create a symlink, else create a copy. If dry_run is true, report what would
-  be done. Dstdir will be created if necessary, even if dry_run is true."""
+  create a symlink, else create a copy.
+
+  If canonical_names is true, check all source files and generate aliases/copies
+  using the canonical name if different from the existing name.
+
+  If dry_run is true, report what would be done. Dstdir will be created if
+  necessary, even if dry_run is true."""
 
   if not path.isdir(srcdir):
-    print >> sys.stderr, '%s is not a directory' % srcdir
+    print('%s is not a directory' % srcdir, file=sys.stderr)
     return
 
   if not dstdir:
@@ -102,36 +111,62 @@ def add_aliases(
   aliases_to_create = {}
   aliases_to_replace = []
   alias_exists = False
-  for als, trg in sorted(aliases.items()):
-    if trg not in seq_to_file:
-      print >> sys.stderr, 'target %s for %s does not exist' % (
-          seq_to_str(trg), seq_to_str(als))
-      continue
-    alias_name = '%s%s.%s' % (prefix, seq_to_str(als), ext)
+
+  def check_alias_seq(seq):
+    alias_str = seq_to_str(seq)
+    alias_name = '%s%s.%s' % (prefix, alias_str, ext)
     alias_path = path.join(dstdir, alias_name)
     if path.exists(alias_path):
       if replace:
         aliases_to_replace.append(alias_name)
       else:
-        print >> sys.stderr, 'alias %s exists' % seq_to_str(als)
+        print('alias %s exists' % alias_str, file=sys.stderr)
         alias_exists = True
+        return None
+    return alias_name
+
+  canonical_to_file = {}
+  for als, trg in sorted(aliases.items()):
+    if trg not in seq_to_file:
+      print('target %s for %s does not exist' % (
+          seq_to_str(trg), seq_to_str(als)), file=sys.stderr)
       continue
-    target_file = seq_to_file[trg]
-    aliases_to_create[alias_name] = target_file
+    alias_name = check_alias_seq(als)
+    if alias_name:
+      target_file = seq_to_file[trg]
+      aliases_to_create[alias_name] = target_file
+      if canonical_names:
+        canonical_seq = unicode_data.get_canonical_emoji_sequence(als)
+        if canonical_seq and canonical_seq != als:
+          canonical_alias_name = check_alias_seq(canonical_seq)
+          if canonical_alias_name:
+            canonical_to_file[canonical_alias_name] = target_file
+
+  if canonical_names:
+    print('adding %d canonical aliases' % len(canonical_to_file))
+    for seq, f in seq_to_file.iteritems():
+      canonical_seq = unicode_data.get_canonical_emoji_sequence(seq)
+      if canonical_seq and canonical_seq != seq:
+        alias_name = check_alias_seq(canonical_seq)
+        if alias_name:
+          canonical_to_file[alias_name] = f
+
+    print('adding %d total canonical sequences' % len(canonical_to_file))
+    aliases_to_create.update(canonical_to_file)
 
   if replace:
     if not dry_run:
       for k in sorted(aliases_to_replace):
         os.remove(path.join(dstdir, k))
-    print 'replacing %d files' % len(aliases_to_replace)
+    print('replacing %d files' % len(aliases_to_replace))
   elif alias_exists:
-    print >> sys.stderr, 'aborting, aliases exist.'
+    print('aborting, aliases exist.', file=sys.stderr)
     return
 
   for k, v in sorted(aliases_to_create.items()):
     if dry_run:
       msg = 'replace ' if k in aliases_to_replace else ''
-      print '%s%s -> %s' % (msg, k, v)
+      print('%s%s -> %s' % (msg, k, v))
     else:
       try:
         if copy:
@@ -143,10 +178,10 @@ def add_aliases(
         else:
           raise Exception('can\'t create cross-directory symlinks yet')
       except Exception as e:
-        print >> sys.stderr, 'failed to create %s -> %s' % (k, v)
+        print('failed to create %s -> %s' % (k, v), file=sys.stderr)
         raise Exception('oops, ' + str(e))
-  print 'created %d %s' % (
-      len(aliases_to_create), 'copies' if copy else 'symlinks')
+  print('created %d %s' % (
+      len(aliases_to_create), 'copies' if copy else 'symlinks'))
 
 
 def main():
@@ -165,13 +200,16 @@ def main():
       metavar='pfx', default='emoji_u')
   parser.add_argument(
       '-e', '--ext', help='file name extension (default png)',
-      choices=['ai', 'png', 'sgv'], default='png')
+      choices=['ai', 'png', 'svg'], default='png')
   parser.add_argument(
       '-r', '--replace', help='replace existing files/aliases',
       action='store_true')
   parser.add_argument(
       '-c', '--copy', help='create a copy of the file, not a symlink',
       action='store_true')
+  parser.add_argument(
+      '--canonical_names', help='include extra copies with canonical names '
+      '(including fe0f emoji presentation character)', action='store_true');
   parser.add_argument(
       '-n', '--dry_run', help='print out aliases to create only',
       action='store_true')
@@ -179,7 +217,7 @@ def main():
 
   add_aliases(
       args.srcdir, args.dstdir, args.aliasfile, args.prefix, args.ext,
-      args.replace, args.copy, args.dry_run)
+      args.replace, args.copy, args.canonical_names, args.dry_run)
 
 
 if __name__ == '__main__':
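Most of the Python changes in this commit are mechanical print-statement conversions enabled by `from __future__ import print_function`. As a self-contained illustration, here is the `str_to_seq` helper from the hunk above, runnable on both Python 2 and 3 (the sample sequence is a hypothetical file-name stem, not taken from the repo):

```python
from __future__ import print_function  # makes print() a function on Python 2 too


def str_to_seq(seq_str):
    # Parse an underscore-separated hex codepoint string (the stem of an
    # 'emoji_u...' file name) into a tuple, dropping the fe0f emoji
    # variation selector if it appears.
    res = [int(s, 16) for s in seq_str.split('_')]
    if 0xfe0f in res:
        print('0xfe0f in file name: %s' % seq_str)
        res = [x for x in res if x != 0xfe0f]
    return tuple(res)


print(str_to_seq('1f468_200d_2764_fe0f_200d_1f468'))
```

With the future import in place, the same `print(...)` call form works under both interpreters, which is what lets these files keep one code path during the 2-to-3 transition.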
@@ -16,6 +16,7 @@
 # Google Author(s): Doug Felt
 
 """Tool to update GSUB, hmtx, cmap, glyf tables with svg image glyphs."""
+from __future__ import print_function
 
 import argparse
 import glob
@@ -171,7 +172,7 @@ class FontBuilder(object):
     self.svgs.append(svg_record)
 
 
-def collect_glyphstr_file_pairs(prefix, ext, include=None, exclude=None):
+def collect_glyphstr_file_pairs(prefix, ext, include=None, exclude=None, verbosity=1):
   """Scan files with the given prefix and extension, and return a list of
   (glyphstr, filename) where glyphstr is the character or ligature, and filename
   is the image file associated with it. The glyphstr is formed by decoding the
@@ -199,7 +200,7 @@ def collect_glyphstr_file_pairs(prefix, ext, include=None, exclude=None):
 
     if ex and ex.search(image_file):
       if verbosity > 1:
-        print "Exclude %s" % image_file
+        print("Exclude %s" % image_file)
       ex_count += 1
       continue
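The `verbosity` parameter added to `collect_glyphstr_file_pairs` gates the per-file "Exclude" messages. A minimal sketch of that exclude logic in isolation (`filter_files` and its arguments are illustrative names, not the repo's API):

```python
import re


def filter_files(names, exclude=None, verbosity=1):
    # Skip any name matching the exclude regex; report each skip only when
    # verbosity is raised, as in the hunk above.
    ex = re.compile(exclude) if exclude else None
    kept, ex_count = [], 0
    for name in names:
        if ex and ex.search(name):
            if verbosity > 1:
                print("Exclude %s" % name)
            ex_count += 1
            continue
        kept.append(name)
    return kept, ex_count


kept, skipped = filter_files(
    ['emoji_u1f600.png', 'emoji_u1f1fa_1f1f8.png'], exclude='1f1fa')
```

At the default `verbosity=1` the filtering still happens but stays quiet; callers that want the per-file log pass a higher verbosity.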
@@ -15,6 +15,7 @@
 # limitations under the License.
 
 """Compare emoji image file namings against unicode property data."""
+from __future__ import print_function
 
 import argparse
 import collections
@@ -95,9 +96,9 @@ def _check_valid_emoji(sorted_seq_to_filepath):
       not_emoji[cp].append(fp)
 
   if len(not_emoji):
-    print >> sys.stderr, '%d non-emoji found:' % len(not_emoji)
+    print('%d non-emoji found:' % len(not_emoji), file=sys.stderr)
     for cp in sorted(not_emoji):
-      print >> sys.stderr, '%04x (in %s)' % (cp, ', '.join(not_emoji[cp]))
+      print('%04x (in %s)' % (cp, ', '.join(not_emoji[cp])), file=sys.stderr)
 
 
 def _check_zwj(sorted_seq_to_filepath):
@@ -109,21 +110,21 @@ def _check_zwj(sorted_seq_to_filepath):
     if ZWJ not in seq:
       continue
     if seq[0] == 0x200d:
-      print >> sys.stderr, 'zwj at head of sequence in %s' % fp
+      print('zwj at head of sequence in %s' % fp, file=sys.stderr)
     if len(seq) == 1:
       continue
     if seq[-1] == 0x200d:
-      print >> sys.stderr, 'zwj at end of sequence in %s' % fp
+      print('zwj at end of sequence in %s' % fp, file=sys.stderr)
     for i, cp in enumerate(seq):
       if cp == ZWJ:
         if i > 0:
           pcp = seq[i-1]
           if pcp != EMOJI_PRESENTATION_VS and not unicode_data.is_emoji(pcp):
-            print >> sys.stderr, 'non-emoji %04x preceeds ZWJ in %s' % (pcp, fp)
+            print('non-emoji %04x preceeds ZWJ in %s' % (pcp, fp), file=sys.stderr)
         if i < len(seq) - 1:
           fcp = seq[i+1]
           if not unicode_data.is_emoji(fcp):
-            print >> sys.stderr, 'non-emoji %04x follows ZWJ in %s' % (fcp, fp)
+            print('non-emoji %04x follows ZWJ in %s' % (fcp, fp), file=sys.stderr)
 
 
 def _check_flags(sorted_seq_to_filepath):
@@ -136,11 +137,11 @@ def _check_flags(sorted_seq_to_filepath):
       if have_reg == None:
         have_reg = is_reg
       elif have_reg != is_reg:
-        print >> sys.stderr, 'mix of regional and non-regional in %s' % fp
+        print('mix of regional and non-regional in %s' % fp, file=sys.stderr)
     if have_reg and len(seq) > 2:
       # We provide dummy glyphs for regional indicators, so there are sequences
       # with single regional indicator symbols.
-      print >> sys.stderr, 'regional indicator sequence length != 2 in %s' % fp
+      print('regional indicator sequence length != 2 in %s' % fp, file=sys.stderr)
 
 
 def _check_skintone(sorted_seq_to_filepath):
@@ -153,13 +154,13 @@ def _check_skintone(sorted_seq_to_filepath):
       if _is_skintone_modifier(cp):
         if i == 0:
           if len(seq) > 1:
-            print >> sys.stderr, 'skin color selector first in sequence %s' % fp
+            print('skin color selector first in sequence %s' % fp, file=sys.stderr)
           # standalone are ok
           continue
         pcp = seq[i-1]
         if not unicode_data.is_emoji_modifier_base(pcp):
-          print >> sys.stderr, (
-              'emoji skintone modifier applied to non-base at %d: %s' % (i, fp))
+          print((
+              'emoji skintone modifier applied to non-base at %d: %s' % (i, fp)), file=sys.stderr)
       elif unicode_data.is_emoji_modifier_base(cp):
         if i < len(seq) - 1 and _is_skintone_modifier(seq[i+1]):
           base_to_modifiers[cp].add(seq[i+1])
@@ -167,9 +168,9 @@ def _check_skintone(sorted_seq_to_filepath):
           base_to_modifiers[cp] = set()
   for cp, modifiers in sorted(base_to_modifiers.iteritems()):
     if len(modifiers) != 5:
-      print >> sys.stderr, 'emoji base %04x has %d modifiers defined (%s) in %s' % (
+      print('emoji base %04x has %d modifiers defined (%s) in %s' % (
           cp, len(modifiers),
-          ', '.join('%04x' % cp for cp in sorted(modifiers)), fp)
+          ', '.join('%04x' % cp for cp in sorted(modifiers)), fp), file=sys.stderr)
 
 
 def _check_zwj_sequences(seq_to_filepath):
@@ -189,7 +190,7 @@ def _check_zwj_sequences(seq_to_filepath):
   for seq, fp in zwj_seq_to_filepath.iteritems():
     if seq not in zwj_sequence_to_name:
       if seq not in zwj_sequence_without_vs_to_name_canonical:
-        print >> sys.stderr, 'zwj sequence not defined: %s' % fp
+        print('zwj sequence not defined: %s' % fp, file=sys.stderr)
       else:
         _, can = zwj_sequence_without_vs_to_name_canonical[seq]
         # print >> sys.stderr, 'canonical sequence %s contains vs: %s' % (
@@ -211,7 +212,7 @@ def read_emoji_aliases():
     try:
       trg_seq = tuple([int(x, 16) for x in trg.split('_')])
     except:
-      print 'cannot process alias %s -> %s' % (als, trg)
+      print('cannot process alias %s -> %s' % (als, trg))
      continue
     result[als_seq] = trg_seq
   return result
@@ -229,11 +230,11 @@ def _check_coverage(seq_to_filepath):
   aliases = read_emoji_aliases()
   for k, v in sorted(aliases.items()):
     if v not in seq_to_filepath and v not in non_vs_to_canonical:
-      print 'alias %s missing target %s' % (_seq_string(k), _seq_string(v))
+      print('alias %s missing target %s' % (_seq_string(k), _seq_string(v)))
       continue
     if k in seq_to_filepath or k in non_vs_to_canonical:
-      print 'alias %s already exists as %s (%s)' % (
-          _seq_string(k), _seq_string(v), seq_name(v))
+      print('alias %s already exists as %s (%s)' % (
+          _seq_string(k), _seq_string(v), seq_name(v)))
       continue
     filename = seq_to_filepath.get(v) or seq_to_filepath[non_vs_to_canonical[v]]
     seq_to_filepath[k] = 'alias:' + filename
@@ -242,13 +243,13 @@ def _check_coverage(seq_to_filepath):
   emoji = sorted(unicode_data.get_emoji(age=age))
   for cp in emoji:
     if tuple([cp]) not in seq_to_filepath:
-      print 'missing single %04x (%s)' % (cp, unicode_data.name(cp, '<no name>'))
+      print('missing single %04x (%s)' % (cp, unicode_data.name(cp, '<no name>')))
 
   # special characters
   # all but combining enclosing keycap are currently marked as emoji
   for cp in [ord('*'), ord('#'), ord(u'\u20e3')] + range(0x30, 0x3a):
     if cp not in emoji and tuple([cp]) not in seq_to_filepath:
-      print 'missing special %04x (%s)' % (cp, unicode_data.name(cp))
+      print('missing special %04x (%s)' % (cp, unicode_data.name(cp)))
 
   # combining sequences
   comb_seq_to_name = sorted(
@@ -258,22 +259,22 @@ def _check_coverage(seq_to_filepath):
       # strip vs and try again
       non_vs_seq = strip_vs(seq)
       if non_vs_seq not in seq_to_filepath:
-        print 'missing combining sequence %s (%s)' % (_seq_string(seq), name)
+        print('missing combining sequence %s (%s)' % (_seq_string(seq), name))
 
   # flag sequences
   flag_seq_to_name = sorted(
       unicode_data.get_emoji_flag_sequences(age=age).iteritems())
   for seq, name in flag_seq_to_name:
     if seq not in seq_to_filepath:
-      print 'missing flag sequence %s (%s)' % (_seq_string(seq), name)
+      print('missing flag sequence %s (%s)' % (_seq_string(seq), name))
 
   # skin tone modifier sequences
   mod_seq_to_name = sorted(
       unicode_data.get_emoji_modifier_sequences(age=age).iteritems())
   for seq, name in mod_seq_to_name:
     if seq not in seq_to_filepath:
-      print 'missing modifier sequence %s (%s)' % (
-          _seq_string(seq), name)
+      print('missing modifier sequence %s (%s)' % (
+          _seq_string(seq), name))
 
   # zwj sequences
   # some of ours include the emoji presentation variation selector and some
@@ -294,14 +295,14 @@ def _check_coverage(seq_to_filepath):
     else:
       test_seq = seq
     if test_seq not in zwj_seq_without_vs:
-      print 'missing (canonical) zwj sequence %s (%s)' % (
-          _seq_string(seq), name)
+      print('missing (canonical) zwj sequence %s (%s)' % (
+          _seq_string(seq), name))
 
   # check for 'unknown flag'
   # this is either emoji_ufe82b or 'unknown_flag', we filter out things that
   # don't start with our prefix so 'unknown_flag' would be excluded by default.
   if tuple([0xfe82b]) not in seq_to_filepath:
-    print 'missing unknown flag PUA fe82b'
+    print('missing unknown flag PUA fe82b')
 
 
 def check_sequence_to_filepath(seq_to_filepath):
@@ -322,7 +323,7 @@ def create_sequence_to_filepath(name_to_dirpath, prefix, suffix):
   result = {}
   for name, dirname in name_to_dirpath.iteritems():
     if not name.startswith(prefix):
-      print 'expected prefix "%s" for "%s"' % (prefix, name)
+      print('expected prefix "%s" for "%s"' % (prefix, name))
       continue
 
     segments = name[len(prefix): -len(suffix)].split('_')
@@ -330,12 +331,12 @@ def create_sequence_to_filepath(name_to_dirpath, prefix, suffix):
     seq = []
     for s in segments:
       if not segment_re.match(s):
-        print 'bad codepoint name "%s" in %s/%s' % (s, dirname, name)
+        print('bad codepoint name "%s" in %s/%s' % (s, dirname, name))
         segfail = True
         continue
       n = int(s, 16)
       if n > 0x10ffff:
-        print 'codepoint "%s" out of range in %s/%s' % (s, dirname, name)
+        print('codepoint "%s" out of range in %s/%s' % (s, dirname, name))
         segfail = True
         continue
       seq.append(n)
@@ -356,8 +357,8 @@ def collect_name_to_dirpath(directory, prefix, suffix):
       if not f.endswith(suffix):
         continue
       if f in result:
-        print >> sys.stderr, 'duplicate file "%s" in %s and %s ' % (
-            f, dirname, result[f])
+        print('duplicate file "%s" in %s and %s ' % (
+            f, dirname, result[f]), file=sys.stderr)
         continue
       result[f] = dirname
   return result
@@ -375,15 +376,15 @@ def collect_name_to_dirpath_with_override(dirs, prefix, suffix):
 
 
 def run_check(dirs, prefix, suffix):
-  print 'Checking files with prefix "%s" and suffix "%s" in:\n  %s' % (
-      prefix, suffix, '\n  '.join(dirs))
+  print('Checking files with prefix "%s" and suffix "%s" in:\n  %s' % (
+      prefix, suffix, '\n  '.join(dirs)))
   name_to_dirpath = collect_name_to_dirpath_with_override(
      dirs, prefix=prefix, suffix=suffix)
-  print 'checking %d names' % len(name_to_dirpath)
+  print('checking %d names' % len(name_to_dirpath))
   seq_to_filepath = create_sequence_to_filepath(name_to_dirpath, prefix, suffix)
-  print 'checking %d sequences' % len(seq_to_filepath)
+  print('checking %d sequences' % len(seq_to_filepath))
   check_sequence_to_filepath(seq_to_filepath)
-  print 'done.'
+  print('done.')
 
 
 def main():
@@ -96,14 +96,14 @@ def copy_with_rename(src_dir, dst_dir, accept_pred=None, rename=None):


 def build_svg_dir(dst_dir, clean=False, emoji_dir='', flags_dir=''):
-  """Copies/renames files from emoji_dir and then flag_dir, giving them the
+  """Copies/renames files from emoji_dir and then flags_dir, giving them the
   standard format and prefix ('emoji_u' followed by codepoints expressed in hex
   separated by underscore). If clean, removes the target dir before proceding.
-  If either emoji_dir or flag_dir are empty, skips them."""
+  If either emoji_dir or flags_dir are empty, skips them."""

   dst_dir = tool_utils.ensure_dir_exists(dst_dir, clean=clean)

-  if not emoji_dir and not flag_dir:
+  if not emoji_dir and not flags_dir:
     logging.warning('Nothing to do.')
     return
@@ -15,6 +15,7 @@
 # limitations under the License.

 """Generate a glyph name for flag emojis."""
+from __future__ import print_function

 __author__ = 'roozbeh@google.com (Roozbeh Pournader)'

@@ -48,8 +49,8 @@ def flag_code_to_glyph_name(flag_code):


 def main():
-  print ' '.join([
-      flag_code_to_glyph_name(flag_code) for flag_code in sys.argv[1:]])
+  print(' '.join([
+      flag_code_to_glyph_name(flag_code) for flag_code in sys.argv[1:]]))

 if __name__ == '__main__':
   main()
flag_info.py (13 changed lines)

@@ -17,6 +17,7 @@
 """Quick tool to display count/ids of flag images in a directory named
 either using ASCII upper case pairs or the emoji_u+codepoint_sequence
 names."""
+from __future__ import print_function

 import argparse
 import re

@@ -44,7 +45,7 @@ def _flag_names_from_file_names(src):
   for f in glob.glob(path.join(src, '*.png')):
     m = flag_re.match(path.basename(f))
     if not m:
-      print 'no match'
+      print('no match')
       continue
     flags.add(m.group(1))
   return flags
@@ -52,14 +53,14 @@ def _flag_names_from_file_names(src):

 def _dump_flag_info(names):
   prev = None
-  print '%d flags' % len(names)
+  print('%d flags' % len(names))
   for n in sorted(names):
     if n[0] != prev:
       if prev:
-        print
+        print()
       prev = n[0]
-    print n,
-  print
+    print(n, end=' ')
+  print()


 def main():

@@ -76,7 +77,7 @@ def main():
     names = _flag_names_from_file_names(args.srcdir)
   else:
     names = _flag_names_from_emoji_file_names(args.srcdir)
-  print args.srcdir
+  print(args.srcdir)
   _dump_flag_info(names)
@@ -19,6 +19,7 @@
 This takes a list of directories containing emoji image files, and
 builds an html page presenting the images along with their composition
 (for sequences) and unicode names (for individual emoji)."""
+from __future__ import print_function

 import argparse
 import codecs

@@ -109,11 +110,11 @@ def _get_desc(key_tuple, aliases, dir_infos, basepaths):
     if cp_key in aliases:
       fp = get_key_filepath(aliases[cp_key])
     else:
-      print 'no alias for %s' % unicode_data.seq_to_string(cp_key)
+      print('no alias for %s' % unicode_data.seq_to_string(cp_key))
   if not fp:
-    print 'no part for %s in %s' % (
+    print('no part for %s in %s' % (
         unicode_data.seq_to_string(cp_key),
-        unicode_data.seq_to_string(key_tuple))
+        unicode_data.seq_to_string(key_tuple)))
   return fp

 def _get_part(cp):
@@ -153,7 +154,7 @@ def _get_name(key_tuple, annotations):
   elif key_tuple == (0xfe82b,):
     seq_name = '(unknown flag PUA codepoint)'
   else:
-    print 'no name for %s' % unicode_data.seq_to_string(key_tuple)
+    print('no name for %s' % unicode_data.seq_to_string(key_tuple))
     seq_name = '(oops)'
   return CELL_PREFIX + seq_name

@@ -308,8 +309,8 @@ def _get_image_data(image_dir, ext, prefix):
       continue
     result[cps] = filename
   if fails:
-    print >> sys.stderr, 'get_image_data failed (%s, %s, %s):\n %s' % (
-        image_dir, ext, prefix, '\n '.join(fails))
+    print('get_image_data failed (%s, %s, %s):\n %s' % (
+        image_dir, ext, prefix, '\n '.join(fails)), file=sys.stderr)
     raise ValueError('get image data failed')
   return result
@@ -356,9 +357,9 @@ def _add_aliases(keys, aliases):
     v_str = unicode_data.seq_to_string(v)
     if k in keys:
       msg = '' if v in keys else ' but it\'s not present'
-      print 'have alias image %s, should use %s%s' % (k_str, v_str, msg)
+      print('have alias image %s, should use %s%s' % (k_str, v_str, msg))
     elif v not in keys:
-      print 'can\'t use alias %s, no image matching %s' % (k_str, v_str)
+      print('can\'t use alias %s, no image matching %s' % (k_str, v_str))
   to_add = {k for k, v in aliases.iteritems() if k not in keys and v in keys}
   return keys | to_add

@@ -449,9 +450,9 @@ def _instantiate_template(template, arg_dict):
   keyset = set(arg_dict.keys())
   extra_args = keyset - ids
   if extra_args:
-    print >> sys.stderr, (
+    print((
         'the following %d args are unused:\n%s' %
-        (len(extra_args), ', '.join(sorted(extra_args))))
+        (len(extra_args), ', '.join(sorted(extra_args)))), file=sys.stderr)
   return string.Template(template).substitute(arg_dict)
@@ -605,7 +606,7 @@ def main():
   file_parts = path.splitext(args.outfile)
   if file_parts[1] != '.html':
     args.outfile = file_parts[0] + '.html'
-    print 'added .html extension to filename:\n%s' % args.outfile
+    print('added .html extension to filename:\n%s' % args.outfile)

   if args.annotate:
     annotations = _parse_annotation_file(args.annotate)
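The extension check in `main()` above reduces to a small pure function. A sketch (the `ensure_html_ext` name is illustrative; the real code mutates `args.outfile` in place):

```python
from os import path

def ensure_html_ext(outfile):
    # Replace any non-.html extension with .html, as main() does
    # before writing the output page.
    file_parts = path.splitext(outfile)
    if file_parts[1] != '.html':
        return file_parts[0] + '.html'
    return outfile
```

Note that a bare name like `'out'` also gains the extension, since `splitext` returns an empty suffix for it.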
@@ -16,6 +16,7 @@
 # limitations under the License.

 """Generate name data for emoji resources. Currently in json format."""
+from __future__ import print_function

 import argparse
 import collections

@@ -273,20 +274,38 @@ def _name_data(seq, seq_file):


 def generate_names(
-    src_dir, dst_dir, skip_limit=20, pretty_print=False, verbose=False):
+    src_dir, dst_dir, skip_limit=20, omit_groups=None, pretty_print=False,
+    verbose=False):
   srcdir = tool_utils.resolve_path(src_dir)
   if not path.isdir(srcdir):
-    print >> sys.stderr, '%s is not a directory' % src_dir
+    print('%s is not a directory' % src_dir, file=sys.stderr)
     return

+  if omit_groups:
+    unknown_groups = set(omit_groups) - set(unicode_data.get_emoji_groups())
+    if unknown_groups:
+      print('did not recognize %d group%s: %s' % (
+          len(unknown_groups), '' if len(unknown_groups) == 1 else 's',
+          ', '.join('"%s"' % g for g in omit_groups if g in unknown_groups)), file=sys.stderr)
+      print('valid groups are:\n %s' % (
+          '\n '.join(g for g in unicode_data.get_emoji_groups())), file=sys.stderr)
+      return
+    print('omitting %d group%s: %s' % (
+        len(omit_groups), '' if len(omit_groups) == 1 else 's',
+        ', '.join('"%s"' % g for g in omit_groups)))
+  else:
+    # might be None
+    print('keeping all groups')
+    omit_groups = []

   # make sure the destination exists
   dstdir = tool_utils.ensure_dir_exists(
       tool_utils.resolve_path(dst_dir))

   # _get_image_data returns canonical cp sequences
-  print 'src dir:', srcdir
+  print('src dir:', srcdir)
   seq_to_file = generate_emoji_html._get_image_data(srcdir, 'png', 'emoji_u')
-  print 'seq to file has %d sequences' % len(seq_to_file)
+  print('seq to file has %d sequences' % len(seq_to_file))

   # Aliases add non-gendered versions using gendered images for the most part.
   # But when we display the images, we don't distinguish genders in the
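The new `omit_groups` validation above is a set difference against the known group names: anything unrecognized is reported and aborts the run. Factored out as a helper (the `check_omit_groups` name is illustrative; the real code inlines this against `unicode_data.get_emoji_groups()`):

```python
def check_omit_groups(omit_groups, known_groups):
    # Returns the sorted list of requested group names that are not
    # known; an empty result means the omit list is valid.
    return sorted(set(omit_groups) - set(known_groups))
```

The caller prints the unknown names plus the valid-group list to stderr and returns early whenever the result is non-empty.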
@@ -310,9 +329,9 @@ def generate_names(
     if unicode_data.is_regional_indicator_seq(seq):
       replace_seq = canonical_aliases[seq]
       if seq in seq_to_file:
-        print 'warning, alias %s has file %s' % (
+        print('warning, alias %s has file %s' % (
             unicode_data.regional_indicator_seq_to_string(seq),
-            seq_to_file[seq])
+            seq_to_file[seq]))
         continue
       replace_file = seq_to_file.get(replace_seq)
       if replace_file:

@@ -323,6 +342,8 @@ def generate_names(
   last_skipped_group = None
   skipcount = 0
   for group in unicode_data.get_emoji_groups():
+    if group in omit_groups:
+      continue
     name_data = []
     for seq in unicode_data.get_emoji_in_group(group):
       if seq in excluded:

@@ -332,11 +353,11 @@ def generate_names(
         skipcount += 1
         if verbose:
           if group != last_skipped_group:
-            print 'group %s' % group
+            print('group %s' % group)
             last_skipped_group = group
-          print ' %s (%s)' % (
+          print(' %s (%s)' % (
               unicode_data.seq_to_string(seq),
-              ', '.join(unicode_data.name(cp, 'x') for cp in seq))
+              ', '.join(unicode_data.name(cp, 'x') for cp in seq)))
         if skip_limit >= 0 and skipcount > skip_limit:
           raise Exception('skipped too many items')
       else:

@@ -348,7 +369,7 @@ def generate_names(
     indent = 2 if pretty_print else None
     separators = None if pretty_print else (',', ':')
     json.dump(data, f, indent=indent, separators=separators)
-  print 'wrote %s' % outfile
+  print('wrote %s' % outfile)


 def main():
@@ -368,12 +389,15 @@ def main():
   parser.add_argument(
       '-m', '--missing_limit', help='number of missing images before failure '
       '(default 20), use -1 for no limit', metavar='n', default=20)
+  parser.add_argument(
+      '--omit_groups', help='names of groups to omit (default "Misc")',
+      metavar='name', default=['Misc'], nargs='*')
   parser.add_argument(
       '-v', '--verbose', help='print progress information to stdout',
       action='store_true')
   args = parser.parse_args()
   generate_names(
-      args.srcdir, args.dstdir, args.missing_limit,
+      args.srcdir, args.dstdir, args.missing_limit, args.omit_groups,
       pretty_print=args.pretty_print, verbose=args.verbose)
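`nargs='*'` gives the new `--omit_groups` option three distinct behaviors: flag absent yields the `['Misc']` default, a bare `--omit_groups` yields an empty list (which `generate_names` treats as "keep all groups"), and explicit values are passed through. A quick demonstration:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    '--omit_groups', metavar='name', default=['Misc'], nargs='*')

# Flag absent: the default list applies.
assert parser.parse_args([]).omit_groups == ['Misc']
# Bare flag with no values: empty list, i.e. omit nothing.
assert parser.parse_args(['--omit_groups']).omit_groups == []
# Explicit values are collected into a list.
assert parser.parse_args(
    ['--omit_groups', 'Flags', 'Misc']).omit_groups == ['Flags', 'Misc']
```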
@@ -1,3 +1,4 @@
+from __future__ import print_function
 import os
 from os import path
 import subprocess

@@ -5,7 +6,7 @@ import subprocess
 OUTPUT_DIR = '/tmp/placeholder_emoji'

 def generate_image(name, text):
-  print name, text.replace('\n', '_')
+  print(name, text.replace('\n', '_'))
   subprocess.check_call(
       ['convert', '-size', '100x100', 'label:%s' % text,
        '%s/%s' % (OUTPUT_DIR, name)])

@@ -75,13 +76,13 @@ with open('sequences.txt', 'r') as f:
     elif is_flag_sequence(values):
       text = ''.join(regional_to_ascii(cp) for cp in values)
     elif has_color_patch(values):
-      print 'skipping color patch sequence %s' % seq
+      print('skipping color patch sequence %s' % seq)
     elif is_keycap_sequence(values):
       text = get_keycap_text(values)
     else:
       text = get_combining_text(values)
       if not text:
-        print 'missing %s' % seq
+        print('missing %s' % seq)

     if text:
       if len(text) > 3:
@@ -36,12 +36,19 @@ from nototools import unicode_data

 logger = logging.getLogger('emoji_thumbnails')

-def create_thumbnail(src_path, dst_path):
-  # uses imagemagik
-  # we need imagex exactly 72x72 in size, with transparent background
-  subprocess.check_call([
-      'convert', '-thumbnail', '72x72', '-gravity', 'center', '-background',
-      'none', '-extent', '72x72', src_path, dst_path])
+def create_thumbnail(src_path, dst_path, crop):
+  # Uses imagemagik
+  # We need images exactly 72x72 in size, with transparent background.
+  # Remove 4-pixel LR margins from 136x128 source images if we crop.
+  if crop:
+    cmd = [
+        'convert', src_path, '-crop', '128x128+4+0!', '-thumbnail', '72x72',
+        'PNG32:' + dst_path]
+  else:
+    cmd = [
+        'convert', '-thumbnail', '72x72', '-gravity', 'center', '-background',
+        'none', '-extent', '72x72', src_path, 'PNG32:' + dst_path]
+  subprocess.check_call(cmd)


 def get_inv_aliases():
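The two `convert` invocations differ in shape: the crop path trims the 4px left/right margins off the 136x128 sources (`128x128+4+0!` is a 128x128 region offset 4px from the left) and then scales, while the non-crop path scales and centers onto a transparent 72x72 canvas. Factored as a pure function for testing (the `thumbnail_cmd` name is illustrative; the real code builds `cmd` inline):

```python
def thumbnail_cmd(src_path, dst_path, crop):
    # Builds the ImageMagick argv used by create_thumbnail above.
    # 'PNG32:' forces 32-bit RGBA output so transparency is kept.
    if crop:
        return ['convert', src_path, '-crop', '128x128+4+0!',
                '-thumbnail', '72x72', 'PNG32:' + dst_path]
    return ['convert', '-thumbnail', '72x72', '-gravity', 'center',
            '-background', 'none', '-extent', '72x72',
            src_path, 'PNG32:' + dst_path]
```

Running the returned argv through `subprocess.check_call` requires ImageMagick's `convert` on PATH.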
@@ -77,14 +84,16 @@ def sequence_to_filename(seq, prefix, suffix):
   return ''.join((prefix, unicode_data.seq_to_string(seq), suffix))


-def create_thumbnails_and_aliases(src_dir, dst_dir, dst_prefix):
+def create_thumbnails_and_aliases(src_dir, dst_dir, crop, dst_prefix):
   """Creates thumbnails in dst_dir based on sources in src.dir, using
   dst_prefix. Assumes the source prefix is 'emoji_u' and the common suffix
   is '.png'."""

+  src_dir = tool_utils.resolve_path(src_dir)
   if not path.isdir(src_dir):
     raise ValueError('"%s" is not a directory')
-  dst_dir = tool_utils.ensure_dir_exists(dst_dir)
+
+  dst_dir = tool_utils.ensure_dir_exists(tool_utils.resolve_path(dst_dir))

   src_prefix = 'emoji_u'
   suffix = '.png'

@@ -104,8 +113,9 @@ def create_thumbnails_and_aliases(src_dir, dst_dir, dst_prefix):
     dst_file = sequence_to_filename(seq, dst_prefix, suffix)
     dst_path = path.join(dst_dir, dst_file)

-    create_thumbnail(src_path, dst_path)
-    logger.info('wrote thumbnail: %s' % dst_file)
+    create_thumbnail(src_path, dst_path, crop)
+    logger.info('wrote thumbnail%s: %s' % (
+        ' with crop' if crop else '', dst_file))

     for alias_seq in inv_aliases.get(seq, ()):
       alias_file = sequence_to_filename(alias_seq, dst_prefix, suffix)
@@ -115,15 +125,22 @@ def create_thumbnails_and_aliases(src_dir, dst_dir, dst_prefix):


 def main():
+  SRC_DEFAULT = '[emoji]/build/compressed_pngs'
+  PREFIX_DEFAULT = 'android_'
+
   parser = argparse.ArgumentParser()
   parser.add_argument(
-      '-s', '--src_dir', help='source images', metavar='dir', required=True)
+      '-s', '--src_dir', help='source images (default \'%s\')' % SRC_DEFAULT,
+      default=SRC_DEFAULT, metavar='dir')
   parser.add_argument(
       '-d', '--dst_dir', help='destination directory', metavar='dir',
       required=True)
   parser.add_argument(
-      '-p', '--prefix', help='prefix for thumbnail', metavar='str',
-      default='android_')
+      '-p', '--prefix', help='prefix for thumbnail (default \'%s\')' %
+      PREFIX_DEFAULT, default=PREFIX_DEFAULT, metavar='str')
+  parser.add_argument(
+      '-c', '--crop', help='crop images (will automatically crop if '
+      'src dir is the default)', action='store_true')
   parser.add_argument(
       '-v', '--verbose', help='write log output', metavar='level',
       choices='warning info debug'.split(), const='info',

@@ -133,8 +150,9 @@ def main():
   if args.verbose is not None:
     logging.basicConfig(level=getattr(logging, args.verbose.upper()))

+  crop = args.crop or (args.src_dir == SRC_DEFAULT)
   create_thumbnails_and_aliases(
-      args.src_dir, args.dst_dir, args.prefix)
+      args.src_dir, args.dst_dir, crop, args.prefix)


 if __name__ == '__main__':
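`--crop` is a plain `store_true` flag, but cropping also turns on implicitly whenever the source dir is the default compressed build output (whose images carry the 4px margins). A sketch of just that decision (the `want_crop` helper is illustrative; the real `main()` computes `crop` inline):

```python
import argparse

SRC_DEFAULT = '[emoji]/build/compressed_pngs'

def want_crop(argv):
    # Crop when asked explicitly, or when reading the default
    # build output.
    parser = argparse.ArgumentParser()
    parser.add_argument('-s', '--src_dir', default=SRC_DEFAULT, metavar='dir')
    parser.add_argument('-c', '--crop', action='store_true')
    args = parser.parse_args(argv)
    return args.crop or (args.src_dir == SRC_DEFAULT)
```

Note there is no way to turn cropping *off* for the default source dir; only a non-default `--src_dir` without `-c` skips it.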
@@ -15,6 +15,7 @@
 #
 # Google Author(s): Doug Felt

+from __future__ import print_function
 import argparse
 import os
 import os.path

@@ -120,9 +121,9 @@ View using Firefox 26 and later.
       text_parts.append(text)

   if verbosity and glyph and not found_initial_glyph:
-    print "Did not find glyph '%s', using initial glyph '%s'" % (glyph, initial_glyph_str)
+    print("Did not find glyph '%s', using initial glyph '%s'" % (glyph, initial_glyph_str))
   elif verbosity > 1 and not glyph:
-    print "Using initial glyph '%s'" % initial_glyph_str
+    print("Using initial glyph '%s'" % initial_glyph_str)

   lines = [header % font_name]
   lines.append(body_head % {'font':font_name, 'glyph':initial_glyph_str,
@@ -133,28 +134,28 @@ View using Firefox 26 and later.
   with open(html_name, 'w') as fp:
     fp.write(output)
   if verbosity:
-    print 'Wrote ' + html_name
+    print('Wrote ' + html_name)


 def do_generate_fonts(template_file, font_basename, pairs, reuse=0, verbosity=1):
   out_woff = font_basename + '.woff'
   if reuse > 1 and os.path.isfile(out_woff) and os.access(out_woff, os.R_OK):
     if verbosity:
-      print 'Reusing ' + out_woff
+      print('Reusing ' + out_woff)
     return

   out_ttx = font_basename + '.ttx'
   if reuse == 0:
     add_svg_glyphs.add_image_glyphs(template_file, out_ttx, pairs, verbosity=verbosity)
   elif verbosity:
-    print 'Reusing ' + out_ttx
+    print('Reusing ' + out_ttx)

   quiet=verbosity < 2
   font = ttx.TTFont(flavor='woff', quiet=quiet)
   font.importXML(out_ttx, quiet=quiet)
   font.save(out_woff)
   if verbosity:
-    print 'Wrote ' + out_woff
+    print('Wrote ' + out_woff)


 def main(argv):
@@ -193,7 +194,7 @@ def main(argv):
   if not out_basename:
     out_basename = args.template_file.split('.')[0]  # exclude e.g. '.tmpl.ttx'
   if args.v:
-    print "Output basename is %s." % out_basename
+    print("Output basename is %s." % out_basename)
   do_generate_fonts(args.template_file, out_basename, pairs, reuse=args.reuse_font, verbosity=args.v)
   do_generate_test_html(out_basename, pairs, glyph=args.glyph, verbosity=args.v)
@@ -16,6 +16,7 @@

 """Create a copy of the emoji images that instantiates aliases, etc. as
 symlinks."""
+from __future__ import print_function

 import argparse
 import glob

@@ -68,10 +69,10 @@ def _alias_people(code_strings, dst):
     if src[1:].lower() in code_strings:
       src_name = 'emoji_%s.png' % src.lower()
       ali_name = 'emoji_u%s.png' % ali.lower()
-      print 'creating symlink %s -> %s' % (ali_name, src_name)
+      print('creating symlink %s -> %s' % (ali_name, src_name))
       os.symlink(path.join(dst, src_name), path.join(dst, ali_name))
     else:
-      print >> os.stderr, 'people image %s not found' % src
+      print('people image %s not found' % src, file=os.stderr)


 def _alias_flags(code_strings, dst):
@@ -80,27 +81,27 @@ def _alias_flags(code_strings, dst):
     if src_str in code_strings:
       src_name = 'emoji_u%s.png' % src_str
       ali_name = 'emoji_u%s.png' % _flag_str(ali)
-      print 'creating symlink %s (%s) -> %s (%s)' % (ali_name, ali, src_name, src)
+      print('creating symlink %s (%s) -> %s (%s)' % (ali_name, ali, src_name, src))
       os.symlink(path.join(dst, src_name), path.join(dst, ali_name))
     else:
-      print >> os.stderr, 'flag image %s (%s) not found' % (src_name, src)
+      print('flag image %s (%s) not found' % (src_name, src), file=os.stderr)


 def _alias_omitted_flags(code_strings, dst):
   UNKNOWN_FLAG = 'fe82b'
   if UNKNOWN_FLAG not in code_strings:
-    print >> os.stderr, 'unknown flag missing'
+    print('unknown flag missing', file=os.stderr)
     return
   dst_name = 'emoji_u%s.png' % UNKNOWN_FLAG
   dst_path = path.join(dst, dst_name)
   for ali in sorted(OMITTED_FLAGS):
     ali_str = _flag_str(ali)
     if ali_str in code_strings:
-      print >> os.stderr, 'omitted flag %s has image %s' % (ali, ali_str)
+      print('omitted flag %s has image %s' % (ali, ali_str), file=os.stderr)
       continue
     ali_name = 'emoji_u%s.png' % ali_str
-    print 'creating symlink %s (%s) -> unknown_flag (%s)' % (
-        ali_str, ali, dst_name)
+    print('creating symlink %s (%s) -> unknown_flag (%s)' % (
+        ali_str, ali, dst_name))
    os.symlink(dst_path, path.join(dst, ali_name))
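`_flag_str` (defined outside this hunk) turns an ASCII pair like `'XK'` into the underscore-joined regional-indicator codepoints used in the `emoji_u*.png` file names. A plausible reconstruction, assuming the standard mapping of `'A'` to U+1F1E6 REGIONAL INDICATOR SYMBOL LETTER A (the `flag_str` name and body are an illustration, not the source's definition):

```python
def flag_str(flag_code):
    # 'A'..'Z' map to U+1F1E6..U+1F1FF; file names use lowercase hex
    # codepoints joined by '_', e.g. 'US' -> '1f1fa_1f1f8'.
    return '_'.join(
        '%04x' % (ord(c) - ord('A') + 0x1F1E6) for c in flag_code)
```

This is why `emoji_u1f1fa_1f1f8.png` is the US flag image, and why the alias loop above can build file names directly from two-letter codes.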
(Binary PNG image files changed: the diff viewer listed before/after sizes for each; most images grew, e.g. 945 B -> 2.3 KiB, and two images of 2.1 KiB and 2.8 KiB were removed.)
Before Width: | Height: | Size: 569 B After Width: | Height: | Size: 514 B |
Before Width: | Height: | Size: 3.0 KiB |
Before Width: | Height: | Size: 530 B After Width: | Height: | Size: 479 B |
Before Width: | Height: | Size: 2.4 KiB |
Before Width: | Height: | Size: 1.8 KiB After Width: | Height: | Size: 1.8 KiB |
Before Width: | Height: | Size: 6.5 KiB |
Before Width: | Height: | Size: 554 B After Width: | Height: | Size: 494 B |
Before Width: | Height: | Size: 432 B After Width: | Height: | Size: 379 B |
Before Width: | Height: | Size: 2.4 KiB |
Before Width: | Height: | Size: 1.1 KiB After Width: | Height: | Size: 1.1 KiB |
Before Width: | Height: | Size: 3.3 KiB |
Before Width: | Height: | Size: 999 B After Width: | Height: | Size: 942 B |
Before Width: | Height: | Size: 5.3 KiB |
Before Width: | Height: | Size: 502 B After Width: | Height: | Size: 458 B |
Before Width: | Height: | Size: 1.5 KiB After Width: | Height: | Size: 1.5 KiB |
Before Width: | Height: | Size: 1.0 KiB After Width: | Height: | Size: 1013 B |
Before Width: | Height: | Size: 1.8 KiB After Width: | Height: | Size: 1.8 KiB |
Before Width: | Height: | Size: 1008 B After Width: | Height: | Size: 980 B |
Before Width: | Height: | Size: 2.1 KiB After Width: | Height: | Size: 2.1 KiB |
Before Width: | Height: | Size: 1.2 KiB After Width: | Height: | Size: 1.1 KiB |
Before Width: | Height: | Size: 2.8 KiB |
Before Width: | Height: | Size: 490 B After Width: | Height: | Size: 432 B |
Before Width: | Height: | Size: 1.2 KiB After Width: | Height: | Size: 1.1 KiB |
Before Width: | Height: | Size: 6.1 KiB |
Before Width: | Height: | Size: 1.5 KiB After Width: | Height: | Size: 1.5 KiB |
Before Width: | Height: | Size: 1.5 KiB After Width: | Height: | Size: 1.6 KiB |
Before Width: | Height: | Size: 1.7 KiB After Width: | Height: | Size: 1.6 KiB |
Before Width: | Height: | Size: 1.3 KiB After Width: | Height: | Size: 1.2 KiB |
Before Width: | Height: | Size: 1.2 KiB After Width: | Height: | Size: 1.2 KiB |
Before Width: | Height: | Size: 859 B After Width: | Height: | Size: 2.5 KiB |
Before Width: | Height: | Size: 1.4 KiB After Width: | Height: | Size: 3.5 KiB |
Before Width: | Height: | Size: 2.0 KiB After Width: | Height: | Size: 4.0 KiB |
Before Width: | Height: | Size: 1.8 KiB After Width: | Height: | Size: 3.8 KiB |
Before Width: | Height: | Size: 2.2 KiB After Width: | Height: | Size: 5.3 KiB |
Before Width: | Height: | Size: 1.7 KiB After Width: | Height: | Size: 4.0 KiB |