wordfreq 1.4: more words, plus word frequencies from Reddit

The wordfreq module is an easy Python interface for looking up the frequencies of words. It was originally designed for use cases where it was most important to find common words, so it would list all the words that occur at least once per million words: that’s about 30,000 words in English. An advantage of ending the list there is that it loads really fast and takes up a small amount of RAM.

But there’s more to know about word frequencies. There’s a difference between words that are used a bit less than once in a million words, like “almanac”, “crusty”, and “giraffes”, versus words that are used just a few times per billion, such as “centerback”, “polychora”, and “scanlations”. As I’ve started using wordfreq in some aspects of the build process of ConceptNet, I’ve wanted to be able to rank words by frequency even if they’re less common than “giraffes”, and I’m sure other people do too.

So one big change in wordfreq 1.4 is that there is now a ‘large’ wordlist available in the languages that have enough data to support it: English, German, Spanish, French, and Portuguese. These lists contain all words used at least once per 100 million words. The default wordlist is still the smaller, faster one, so you have to ask for the ‘large’ wordlist explicitly — see the documentation.
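
As a quick illustration (using a word from the examples later in this post), a word that is too rare for the default list reports a frequency of zero there, while the ‘large’ list knows about it:

import wordfreq

# 'borogoves' falls below the default list's once-per-million cutoff,
# so the default lookup reports 0. The 'large' list goes down to once
# per 100 million words and gives it a small Zipf value (about 1.16).
print(wordfreq.zipf_frequency('borogoves', 'en'))           # 0.0
print(wordfreq.zipf_frequency('borogoves', 'en', 'large'))  # roughly 1.16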

Including word frequencies from Reddit

The best way to get representative word frequencies is to include a lot of text from a lot of different sources. Now there’s another source available: the Reddit comment corpus.

Reddit is an English-centric site and 99.2% of its comments are in English. We still need to account for the exceptions, such as /r/es, /r/todayilearned_jp, /r/sweden, and of course, the thread named “HELP reddit turned spanish and i cannot undo it!”.

I used pycld2 to detect the language of Reddit comments. In this version, I decided to only use the comments that could be detected as English, because I couldn’t be sure that the data I was getting from other languages was representative enough. For example, unfortunately, most comments in Italian on Reddit are spam, and most comments in Japanese are English speakers trying to learn Japanese. The data that looks the most promising is Spanish, and I might decide to include that in a later version.
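
This isn't the exact pipeline, but here's a minimal sketch of that kind of filtering with pycld2, assuming each comment body is a plain string:

import pycld2

comments = [
    "I can't believe this made the front page.",
    "HELP reddit turned spanish and i cannot undo it!",
    "No tengo idea de lo que está pasando aquí.",
]

def looks_english(text):
    # pycld2.detect returns (isReliable, textBytesFound, details), where
    # details[0] is the top guess as (languageName, languageCode, percent, score).
    is_reliable, _, details = pycld2.detect(text)
    return is_reliable and details[0][1] == 'en'

english_only = [c for c in comments if looks_english(c)]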

So now some Reddit-centric words have claimed a place in the English word list, alongside words from Google Books, Wikipedia, Twitter, television subtitles, and the Leeds Internet Corpus:

>>> wordfreq.zipf_frequency('people', 'en', 'large')
6.23

>>> wordfreq.zipf_frequency('cats', 'en', 'large')
4.42

>>> wordfreq.zipf_frequency('giraffes', 'en', 'large')
3.0

>>> wordfreq.zipf_frequency('narwhals', 'en', 'large')
2.1

>>> wordfreq.zipf_frequency('heffalumps', 'en', 'large')
1.78

>>> wordfreq.zipf_frequency('borogoves', 'en', 'large')
1.16

wordfreq is part of a stack of natural language tools developed at Luminoso and used in ConceptNet. Its data is available under the Creative Commons Attribution-ShareAlike 4.0 license.

Cramming for the test set: We need better ways to evaluate analogies

The publication of word2vec (as “Efficient Estimation of Word Representations in Vector Space” by Mikolov et al.) got a considerable amount of attention by demonstrating that a representation designed to predict words in context could also be used to predict analogies between words. The word2vec authors demonstrated this by including their own corpus of analogies for evaluation. Since then, other representations have been evaluated against that same corpus.

But a word representation that is better at capturing general knowledge of the relationships between things won’t necessarily do better on Mikolov et al.’s evaluation. That evaluation tests numerous examples of only a few types of analogies:

  • Geographical facts, such as “Athens : Greece :: Baghdad : Iraq”
  • Gender-swapping analogies, such as “man : woman :: king : queen”
  • Names of international currency, such as “Angola : kwanza :: Armenia : dram”
  • Morphological relationships, such as “free : freely :: happy : happily”
  • Factoids about multi-word named entities, such as “Baltimore : Baltimore Sun :: Cleveland : Cleveland Plain Dealer”

The multi-word named entities are usually considered separately. Even word2vec, the system this evaluation was designed to showcase, required a differently-trained vector space to get entities like “Cleveland Plain Dealer” into its vocabulary.

Conceptnet Numberbatch and analogy questions

I’ve been posting about the state-of-the-art set of word embeddings, Conceptnet Numberbatch, and you might wonder how it does on word2vec’s analogies. So even though I’m not a big fan of the word2vec analogy data, I ran a quick evaluation to find out, using Omer Levy’s 3CosMul metric for choosing the best analogies. Here’s how it scored, broken down by the type of question:

  • Geography: 95.6%
  • Gender: 95.8%
  • Currency: 45.5%
  • Morphology: ???
  • Multi-word: 2.2% (most terms are out-of-vocabulary)
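
For reference, the 3CosMul rule picks the answer d to “a : b :: c : ?” that maximizes cos(d, b) · cos(d, c) / (cos(d, a) + ε). Here's a minimal sketch of that scoring, assuming a dict mapping words to numpy arrays; it's a generic illustration, not the evaluation code itself:

import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def three_cos_mul(vectors, a, b, c, epsilon=0.001):
    def pos(x):
        # Shift cosine similarity into (0, 1] so products and ratios stay positive.
        return (x + 1) / 2
    best_word, best_score = None, float('-inf')
    for word, vec in vectors.items():
        if word in (a, b, c):
            continue
        score = (pos(cosine(vec, vectors[b])) * pos(cosine(vec, vectors[c]))
                 / (pos(cosine(vec, vectors[a])) + epsilon))
        if score > best_score:
            best_word, best_score = word, score
    return best_word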

Let’s talk about the question marks next to “Morphology”. It doesn’t make sense to ask Numberbatch about morphology. Like most English NLP systems but unlike word2vec, Numberbatch expects morphology to be handled as a separate step. This is a better plan than forgetting everything we know about morphology and hoping the system can rediscover it.

The overwhelming majority of the morphology questions look like “write : writes :: work : works”. Notice that answering this question involves nothing about the meanings of the words “write” and “work”. In fact, the less a system knows about meaning, the less there will be to distract it from its morphological task of adding the letter “s”.

Numberbatch has the same representation for “write” and “writes”, and I think this is reasonable for a system focused on semantics. They have the same meaning, just different morphology. If you want to do morphology, ask a lemmatizer.

So Numberbatch does well on some categories, and it could probably be tuned to do better. But I think this tuning would be counterproductive, because it would reward memorized facts over general knowledge.

Teaching to the test

word2vec’s evaluation was a fine demonstration of the capabilities of word2vec when it was published, but it doesn’t make much sense as a gold standard.

I believe that a system that aces the whole evaluation could be made out of existing tools, and it wouldn’t have very much to do with semantic vectors. Given the analogy A : B :: C : D, it would just look up A and B in Wikipedia and Wiktionary, find connections between them, and return the thing that C is connected to in the same way. Using a pre-parsed version of Wikipedia and Wiktionary would help, and those are things I’ve been working with. You could add in a lemmatizer, but the best lemmatizers are basically condensed versions of Wiktionary anyway.

This would be a silly thing to make. It’s like telling a human student exactly what’s on the test, and letting them bring as many notes as they want. Nothing is left but a test of ability to look things up.

From a machine learning point of view, you might call it “training on the test set”, but I don’t think it’s quite the same thing. There’s no training step involved here. Call it “cramming for the test set” instead. The analogy evaluation is a test of whether your system knows facts and morphology, so knowing facts and morphology is how you succeed at it.

Let’s put this back in perspective, though. The reason the word2vec paper was remarkable is that word2vec wasn’t designed to know facts, or even to be able to make analogies at all. It was designed to predict words in the context of other words, and it happened to be able to make analogies. That was the cool part.

Now that we expect word vectors to be able to form analogies, let’s expect more from our analogies.

English tests for people and computers

Above, I compared a computer running an evaluation to a human learner taking a test. If you want to test whether a human understands analogies, you don’t ask them 10,000 questions about geography. You ask them a lot of different things. So I went looking for analogy tests for people.

I think these kinds of analogy “equations” are falling out of favor in education, probably for good reason. They’re artificial and they have a lot to do with test-taking skills. They’re not on the SAT anymore, so if you really want to know whether someone gets analogies, you now use a separate test called the Miller Analogies Test. I think they’re still pretty reasonable for computers. Computers like equations, and they have mad test-taking skills.

Here are some simple analogies that a semantic representation should be able to make, which I found on a website of resources for English teachers:

  • mouth : eat :: feet : walk
  • awful : bad :: fantastic : good
  • brick : wall :: page : book
  • poor : money :: sad : happiness
  • June : July :: Monday : Tuesday
  • umbrella : rain :: sunscreen : sun

And here are some more difficult ones, from a test-prep book for the Miller Analogies Test:

  • articulate : speech :: coordinated : movement
  • inception : conclusion :: departure : arrival
  • scintillating : dullness :: boisterous : calm
  • elucidate : clarity :: illuminate : light
  • shard : pottery :: splinter : wood
  • attenuate : signal :: dampen : enthusiasm

These examples of analogies from tests also come with multiple-choice distractors, in contrast to the word2vec evaluation, where the vocabulary of all the questions is used as the set of distractors.

Unlike geographical facts, these questions don’t have answers that can simply be looked up. There’s no data set that would name the relationship between “articulate” and “speech” for you in such a way that you can apply the same relationship to “coordinated”. You need a system that can discover a representation of that relationship, and that’s what a good set of semantic vectors can do.

It seems that we can evaluate our semantic systems by giving them tests that were originally designed for people. This approach to semantic evaluation has been used, for example, by Peter Turney, who used SAT questions in “A Uniform Approach to Analogies, Synonyms, Antonyms, and Associations” and related publications.

And now for the big problem: people who write test questions write them under extremely restrictive terms of use. I’d better hope fair use really exists so I can even quote twelve of them here. Turney’s results can no longer be reproduced, through no fault of his, because he is not allowed to distribute his test data.

It would be great if someone who wrote test-prep questions would cooperate with the NLP community and make some of their questions available as an evaluation. I tried e-mailing the website that had the first set of questions on it. I never got a response, and I assume they’re filtering my e-mail as “Strange AI guy” now.

Making it possible to evaluate analogies

There are some great data sets out there about word similarities. MEN-3000, Rare Words, and WordSim-353 are all good examples. They’re in convenient text formats, they’re usually split into development and test sets, and they’re free to redistribute so that your experiments are reproducible.

There should be a way to get analogies up to the same standard. I’ve heard that other people who do this kind of semantics are also looking for a good analogy evaluation. We could get an evaluation corpus the traditional way, with human effort, and divide up the task of making an analogy test for computers among researchers and their students. It wouldn’t be enough for one person or one research group to write all the questions, because they would only write the kinds of questions they expect to be able to handle.

If there were a grant that could fund this, we could more straightforwardly spend money on the problem: we could buy the rights to these test-prep materials from somebody, so that we can convert them into convenient evaluation data, use them, and release them under a Creative Commons license.

Whether their preference is for neural networks, semantic graphs, or logical inferences, many schools of thought on computational semantics agree that analogies are an interesting and relevant task. We should take the opportunity to make our progress on this task measurable and reproducible by obtaining an open, sufficiently general corpus of analogies.

Conceptnet Numberbatch: a new name for the best word embeddings you can download

Recently at Luminoso, we’ve been promoting one of the open-source, open-data products of our research: a set of semantic vectors that we made by combining ConceptNet with other data sources. As I’m launching this new ConceptNet blog, it’s a good time to promote it some more, as it shows why the knowledge in ConceptNet is more important than ever.

Semantic vectors (also known as word embeddings from a deep-learning perspective) let you compare word meanings numerically. Our vectors are measurably better for this than the well-known word2vec vectors (the ones you download from the archived word2vec project page, trained on Google News), and they’re also measurably better than the GloVe vectors.

To be fair, this system takes word2vec and GloVe as inputs so that it can improve them. One great thing about vector representations is that you can put them together into an ensemble that’s better than its parts.

The name we gave it when writing a paper about the system is quite a mouthful: the “ConceptNet Vector Ensemble”. I found myself stumbling over the name when giving updates on it at meetings, while trying to get people not to shorten it to “ConceptNet”, which is a much broader project. It’s hard to get this to catch on as an improvement over word2vec if it has such an anti-catchy name.

Last week, Google released an English parsing model named “Parsey McParseface”. Everybody has heard about it. Giving your machine-learning model a silly Internetty name seems to be a great idea.

And that’s why the ConceptNet Vector Ensemble is now named Conceptnet Numberbatch.

It even remains an accurate, descriptive name! I bet Google’s parser doesn’t even have a face.

What does Conceptnet Numberbatch do?

Conceptnet Numberbatch is a set of semantic vectors: it associates words and phrases in a variety of languages with lists of 600 numbers, representing the gist of what they mean.

Some of the information that these vectors represent comes from ConceptNet, a semantic network of knowledge about word meanings. ConceptNet is collected from a combination of expert-created resources, crowdsourcing, and games with a purpose.

If you want to apply machine learning to the meanings of words and sentences, you probably want your system to start out knowing what a lot of words mean. By comparing semantic vectors, you can find search results that are “near misses” that don’t exactly match the search term, you can tell when one sentence is a paraphrase of another sentence, and you can discover the general topics that are being talked about by finding clusters of vectors.

Here’s an example that we can step through. Suppose we want to ask Conceptnet Numberbatch whether Benedict Cumberbatch is more like an actor or an otter. We start by looking up the rows labeled cumberbatch, actor, and otter in Numberbatch. This gives us a 600-dimensional unit vector for each of them. Here are all of them graphed component-by-component:

These are pretty hard for us to compare visually, but arrays of numbers are quite easy for computers to work with. The important thing here is that vectors that are similar will point in similar directions (which means they have a high dot product as unit vectors). When we look at them component-by-component here, that means that a vector is similar to another vector when they are positive in the same places and negative in the same places. We can visualize this similarity by multiplying the vectors component-wise:

The cumberbatch * actor plot shows a lot more positive components and fewer negative components than cumberbatch * otter, particularly near the left side. The term cumberbatch is like actor in many ways, and unlike it in very few ways. Adding up the component-wise products, we find that cumberbatch is 0.35 similar to actor on a scale from -1 to 1, and it’s only 0.04 similar to otter.
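
In code, that comparison is just a dot product of unit vectors. Here's a minimal sketch, assuming the three rows have already been looked up as numpy arrays named cumberbatch, actor, and otter:

import numpy as np

def similarity(u, v):
    # For unit vectors, the dot product is the cosine similarity: it adds up
    # exactly the component-wise products plotted above.
    return float(np.dot(u / np.linalg.norm(u), v / np.linalg.norm(v)))

# similarity(cumberbatch, actor)  ->  about 0.35
# similarity(cumberbatch, otter)  ->  about 0.04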

Another way to understand these vectors is to rank the semantic vectors that are most similar to them. Here are examples for the three vectors we looked at:

otter
/c/en/otter                  1.000000
/c/en/japanese_river_otter   0.993316
/c/en/european_otter         0.988882
/c/en/otterless              0.951721
/c/en/water_mammal           0.938959
/c/en/otterlike              0.872185
/c/en/otterish               0.869584
/c/en/lutrine                0.838774
/c/en/otterskin              0.833183
/c/en/waitoreke              0.694700
/c/en/musteline_mammal       0.680890
/c/en/raccoon_dog            0.608738
actor
/c/en/actor                  1.000001
/c/en/role_player            0.999875
/c/en/star_in_film           0.950550
/c/en/actorial               0.900689
/c/en/actorish               0.866238
/c/en/work_in_theater        0.853726
/c/en/star_in_movie          0.844339
/c/en/stage_actor            0.842363
/c/en/kiruna_stamell         0.813768
/c/en/actress                0.798980
/c/en/method_act             0.777413
/c/en/in_film                0.770334
cumberbatch
/c/en/cumberbatch            1.000000
/c/en/cumbermania            0.871606
/c/en/cumberbabe             0.853023
/c/en/cumberfan              0.837851
/c/en/sherlock               0.379741
/c/en/star_in_film           0.373129
/c/en/actor                  0.367241
/c/en/role_player            0.367171
/c/en/hiddlestoner           0.355940
/c/en/hiddleston             0.346617
/c/en/actorfic               0.344154
/c/en/holmes                 0.337961
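
Rankings like these come from a brute-force nearest-neighbor search. A minimal sketch, assuming the space is a dict mapping ConceptNet URIs to unit-normalized numpy arrays:

import numpy as np

def most_similar(vectors, query, n=12):
    # With unit vectors, the dot product is the cosine similarity.
    query_vec = vectors[query]
    scored = ((term, float(np.dot(query_vec, vec))) for term, vec in vectors.items())
    return sorted(scored, key=lambda pair: -pair[1])[:n]

# most_similar(vectors, '/c/en/otter') would reproduce a ranking like the one above.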

We evaluated Numberbatch on several measures of semantic similarity. A system scores highly on these tests when it makes the same judgments about which words are similar to each other that a human would. Across the board, Numberbatch is the system with the most human-like similarity judgments. The code and data that support this are available on GitHub.

How does this fit into ConceptNet in general?

ConceptNet is a semantic network of knowledge about word meanings. Since 2007, long before anyone called these “word embeddings”, we’ve provided vector representations of the terms in ConceptNet that can be compared for similarity. We used to make these by decomposing the link structure of ConceptNet using SVD. Now, a variation on Faruqui et al.’s retrofitting does the job better, and that’s what Numberbatch does.

The current version of Numberbatch, 16.04, uses a transformed version of ConceptNet 5.4. It’s not available through the ConceptNet API — for now, you download Numberbatch separately from its own GitHub page.

ConceptNet 5.5 is going to arrive soon, and a new version of Numberbatch based on that data will be merged into its codebase.

Wait, why did the N become lowercase?

You sure ask the important questions, hypothetical reader. Keeping the N in ConceptNet capitalized would be more consistent, but it’d break the flow. You’d probably read “ConceptNet Numberbatch” in a way that sounds less like a double-dactyl name than “Conceptnet Numberbatch” does.

Capitalize the N if you want. Lowercase all the letters if you want. The orthography of these project names isn’t sacred anyway. ConceptNet itself originated from a project that could be called “OpenMind Commonsense”, “OpenMind CommonSense”, “Open Mind Commonsense”, or any of several other variants, until we let it settle on four normal words, “Open Mind Common Sense”. (OMCS was named in the ’90s. Give everyone involved a break.)

Please explain the name and why otters are involved

There’s a fine Internet tradition of concocting names that sound very approximately like “Benedict Cumberbatch”, and now we’ve adopted one such name for our research. For more details, you should read A Linguist Explains the Rules of Summoning Benedict Cumberbatch on The Toast. Then, if you manage to come back from there, you should gaze upon Red Scharlach’s Otters Who Look Like Benedict Cumberbatch.

Conceptnet Numberbatch is entirely our own choice of name, and should not indicate affiliation with or endorsement by any person or any otter.

Coincidentally, back in the day, ConceptNet 3 was partly developed on a PowerMac named “otter”.

The particular otter at the top of this post was photographed by Bernard Landgraf, who has taken several excellent nature photos for Wikipedia. The photo is freely available under a Creative Commons Attribution-ShareAlike 3.0 license.

No otters were harmed in the production of this research.

An introduction to the ConceptNet Vector Ensemble

Here’s a big idea that’s taken hold in natural language processing: meanings are vectors. A text-understanding system can represent the approximate meaning of a word or phrase by representing it as a vector in a multi-dimensional space. Vectors that are close to each other represent similar meanings.

A fragment of a concept-cloud visualization of the ConceptNet Vector Ensemble (CNVE). Words that appear close to each other are similar.

Vectors are how Luminoso has always represented meaning. When we started Luminoso, this was seen as a bit of a crazy idea.

It was an exciting time when the idea of vectors as meanings was suddenly popularized by the Google research project word2vec. Now it isn’t considered a crazy idea anymore; it’s considered the effective thing to do.

Luminoso’s starting point — its model of word meanings when it hasn’t seen any of your documents — comes from a vector-based representation of ConceptNet 5. That gives it general knowledge about what words mean. These vectors are then automatically adjusted based on the specific way that words are used in your domain.

But you might well ask: if these newer systems such as word2vec or GloVe are so effective, should we be using them as our starting point?

As the girl in the Old El Paso commercial asks, 'why don't we have both?'

The best representation of word meanings we’ve seen — and we think it’s the best representation of word meanings anyone has seen — is our new ensemble that combines ConceptNet, GloVe, PPDB, and word2vec. It’s described in our paper, “An Ensemble Method to Produce High-Quality Word Embeddings”, and it’s reproducible using this GitHub repository.

We call this the ConceptNet Vector Ensemble. These domain-general word embeddings fill the same niche as, for example, the word2vec Google News vectors, but by several measures, they represent related meanings more like people do.

A comparison of some word-embedding systems on two measures of word relatedness. Our system, CNVE, is the red dot in the upper right.

Expanding on “retrofitting”

Manaal Faruqui’s Retrofitting, from CMU’s Language Technologies Institute, is a very cool idea.

Every system of word vectors is going to reflect the set of data it was trained on, which means there’s probably more information from outside that data that could make it better. If you’ve got a good set of word vectors, but you wish there was more information it had taken into account — particularly a knowledge graph — you can use a fairly straightforward “retrofitting” procedure to adjust the vectors accordingly.
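
Concretely, each retrofitting pass nudges a word's vector toward the average of its neighbors in the knowledge graph while keeping it anchored to its original position. Here's a minimal sketch of that update (with the common weighting of alpha = 1 for the original vector and beta = 1/degree for each neighbor), not Faruqui's actual code:

import numpy as np

def retrofit(vectors, neighbors, iterations=10):
    # vectors: dict of word -> numpy array (the original embeddings)
    # neighbors: dict of word -> list of related words from the knowledge graph
    new_vectors = {word: vec.copy() for word, vec in vectors.items()}
    for _ in range(iterations):
        for word, original in vectors.items():
            linked = [n for n in neighbors.get(word, []) if n in new_vectors]
            if not linked:
                continue
            # Average of the neighbors' current vectors, blended equally
            # with the word's original vector.
            neighbor_avg = np.mean([new_vectors[n] for n in linked], axis=0)
            new_vectors[word] = (original + neighbor_avg) / 2
    return new_vectors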

Starting with some vectors and adjusting them based on new information — that sure sounds like what I just described about what Luminoso does, right? Faruqui’s retrofitting is not the particular process we use inside Luminoso’s products, but the general idea is related enough to Luminoso’s proprietary process that working with it was quite natural for us, and we found that it does work well.

There’s one idea from our process that can be added to retrofitting easily: if you have information about words that weren’t in your vocabulary to start with, you should automatically expand your vector space to include them.
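
One simple way to do that expansion, sketched here as a hypothetical helper rather than the ensemble's exact procedure: before retrofitting, give each term that appears in the graph but not in the embeddings a starting vector averaged from the neighbors that are already in the space.

import numpy as np

def expand_vocabulary(vectors, neighbors):
    # Terms the graph knows about but the embeddings don't start out at the
    # mean of their in-vocabulary neighbors; retrofitting then refines them.
    expanded = dict(vectors)
    for term, linked in neighbors.items():
        if term in expanded:
            continue
        known = [expanded[t] for t in linked if t in expanded]
        if known:
            expanded[term] = np.mean(known, axis=0)
    return expanded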

Faruqui describes some retrofitting combinations that work well, such as combining GloVe with WordNet. I don’t think anyone had tried doing anything like this with ConceptNet before, and it turns out to be a pretty powerful source of knowledge to add. And when you add this idea of automatically expanding the vocabulary, now you can also represent all the words and phrases in ConceptNet that weren’t in the vocabulary of your original vector space, such as words in other languages.

The multilingual knowledge in ConceptNet is particularly relevant here. Our ensemble can learn more about words based on the things they translate to in languages besides English, and it can represent those words in other languages with the same kind of vectors that it uses to represent English words.

There’s clearly more to be done to extend the full power of this representation to non-English languages. It would be better, for example, if it started with some text in other languages that it could learn from and retrofit onto, instead of relying entirely on the multilingual links in ConceptNet. But it’s promising that the Spanish vectors that our ensemble learns entirely from ConceptNet, starting from having no idea what Spanish is, perform better at word similarity than a system trained on the text of the Spanish Wikipedia.

On the other hand, you have GloVe

For some reason, everyone in this niche talks about word2vec and few people talk about the similar system GloVe, from Stanford NLP. We were more drawn to GloVe as something to experiment with, because we find the way it works clearer than the way word2vec works.

When we compared word2vec and GloVe, we got better initial results from GloVe. Levy et al. report the opposite. I think what this shows is that a whole lot of the performance of these systems is in the fine details of how you use them. And indeed, when we tweak the way we use GloVe — particularly when we borrow a process from ConceptNet to normalize words to their root form — we get word similarities that are much better than word2vec and the original GloVe, even before we retrofit anything onto it.

You can probably guess the next step: “why don’t we use both?” word2vec’s most broadly useful vectors come from Google News articles, while GloVe’s come from reading the Web at large. Those represent different kinds of information. Both of them should be in the system. In the ConceptNet Vector Ensemble, we build a vector space that combines word2vec and GloVe before we start retrofitting.
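
One straightforward way to build such a combined space (a sketch under simplifying assumptions, not necessarily the exact method in our paper) is to normalize each source and concatenate the vectors for the shared vocabulary:

import numpy as np

def combine_spaces(space_a, space_b):
    # space_a, space_b: dicts of word -> numpy array (e.g. word2vec and GloVe).
    # Normalizing each source first keeps one space from dominating the other.
    combined = {}
    for word in set(space_a) & set(space_b):
        a = space_a[word] / np.linalg.norm(space_a[word])
        b = space_b[word] / np.linalg.norm(space_b[word])
        combined[word] = np.concatenate([a, b])
    return combined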

The data flow of building the ConceptNet Vector Ensemble

You can see that creating state-of-the-art word embeddings involves ideas from a number of different people. A few of them are our own — particularly ConceptNet 5, which is entirely developed at Luminoso these days, and the various ways we transformed word embeddings to make them work better together.

This is an exciting, fast-moving area of NLP. We’re telling everyone about our vectors because the openness of word-embedding research made them possible, and if we kept our own improvement quiet, the field would probably find a way to move on without it at the cost of some unnecessary effort.

These vectors are available for download under a Creative Commons Attribution-ShareAlike license. If you’re working on an application that starts from a vector representation of words — maybe you’re working in the still-congealing field of Deep Learning methods for NLP — you should give the ConceptNet Vector Ensemble a try.

wordfreq 1.2 is better at Chinese, English, Greek, Polish, Swedish, and Turkish

In a previous post, we introduced wordfreq, our open-source Python library that lets you ask “how common is this word?”

Wordfreq is an important low-level tool for Luminoso. It’s one of the things we use to figure out which words are important in a set of text data. When we get the word frequencies figured out in a language, that’s a big step toward being able to handle that language from end to end in the Luminoso pipeline. We recently started supporting Arabic in our product and improved Chinese enough to take the “BETA” tag off of it; having the right word frequencies for those languages was a big part of that.

I’ve continued to work on wordfreq, putting together more data from more languages. We now have 17 languages that meet the threshold of having three independent sources of word frequencies, which we consider important for those word frequencies to be representative.

Here’s what’s new in wordfreq 1.2:

  • The English word list has gotten a bit more robust and a bit more British by including SUBTLEX, adding word frequencies from American TV shows as well as the BBC.
  • It can fearlessly handle Chinese now. It uses a lovely pure-Python Chinese tokenizer, Jieba, to handle multiple-word phrases, and Jieba’s built-in wordlist provides a third independent source of word frequencies. Wordfreq can even smooth over the differences between Traditional and Simplified Chinese.
  • Greek has also been promoted to a fully-supported language. With new data from Twitter and OpenSubtitles, it now has four independent sources.
  • In some applications, you want to tokenize a complete piece of text, including punctuation as separate tokens. Punctuation tokens don’t get their own word frequencies, but you can ask the tokenizer to give you the punctuation tokens anyway (see the sketch after this list).
  • We added support for Polish, Swedish, and Turkish. All those languages have a reasonable amount of data that we could obtain from OpenSubtitles, Twitter, and Wikipedia by doing what we were doing already.
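
Here's roughly what those features look like in use. The include_punctuation keyword matches how the tokenizer is documented, but treat the exact names and outputs as assumptions that may vary by version:

import wordfreq

# Tokenize a complete text, asking for punctuation tokens as well.
print(wordfreq.tokenize("Hey! Isn't this nice?", 'en', include_punctuation=True))

# Chinese text is segmented into multi-character words using Jieba.
print(wordfreq.tokenize('谢谢你', 'zh'))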

When adding Turkish, we made sure to convert the case of dotted and dotless İ’s correctly. We know that putting the dots in the wrong places can lead to miscommunication and even fatal stabbings.
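
To see why this needs special care, here's what Python's default case conversion does with these letters (a quick illustration of the pitfall, not wordfreq's internal code):

# The capital dotted İ lowercases to 'i' plus a combining dot above, and the
# plain capital I lowercases to a dotted 'i', where Turkish expects the
# dotless 'ı'. wordfreq does the Turkish-specific conversion itself.
print('İ'.lower())   # 'i̇' (two code points: 'i' + U+0307)
print('I'.lower())   # 'i', not the Turkish 'ı'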

The language in wordfreq that’s still only partially supported is Korean. We still only have two sources of data for it, so you’ll see the disproportionate influence of Twitter on its frequencies. If you know where to find a lot of freely-usable Korean subtitles, for example, we would love to know.

Wordfreq 1.2 is available on GitHub and PyPI.