Week 4: A Large Detour

Mar 11, 2019

As I’ve learned this week, research doesn’t always pan out as expected. I spent week 4 reading up on the second method to be used in my project. The paper itself was interesting and offered yet another unique approach to classifying phrases as either metaphorical or literal. The algorithm was devised by Peter Turney, with the help of Yair Neuman, Dan Assaf, and Yohai Cohen. To classify phrases as metaphorical or literal, Turney and his team examined the concreteness relationships between different words.

Turney’s data set was formatted as follows: [Adjective] [Noun]

All of the words used in Turney’s experiment were rated for abstractness on a scale from 0 to 1, with higher values indicating more abstract words. For example, the phrase dark mood is metaphorical, while by contrast, the phrase bad mood is literal. Within Turney’s experiment, dark has an abstractness rating of 0.43356 (relatively concrete), mood has a rating of 0.61858 (relatively abstract), and bad has a rating of 0.63326 (also relatively abstract). Turney determined that a phrase was much more likely to be metaphorical when a concrete term was paired with an abstract term; this is the relationship that I want to model when I code my own method.
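To make that relationship concrete, here is a toy sketch of the intuition. Turney’s actual classifier is trained from labeled data over abstractness features; the threshold, function name, and the tiny hard-coded ratings table below are my own illustrative assumptions, not his algorithm.

```python
# Toy illustration of the concreteness intuition: a relatively concrete
# adjective modifying a relatively abstract noun suggests metaphor.
# Ratings are the example values quoted above (0 = concrete, 1 = abstract).
ABSTRACTNESS = {
    "dark": 0.43356,
    "bad": 0.63326,
    "mood": 0.61858,
}

def classify(adjective: str, noun: str, threshold: float = 0.1) -> str:
    """Label an [Adjective] [Noun] phrase using the abstractness gap."""
    gap = ABSTRACTNESS[noun] - ABSTRACTNESS[adjective]
    # A large positive gap means a concrete adjective paired with an
    # abstract noun, which is the pattern Turney associates with metaphor.
    return "metaphorical" if gap > threshold else "literal"

print(classify("dark", "mood"))  # metaphorical (gap = 0.18502)
print(classify("bad", "mood"))   # literal (gap = -0.01468)
```

In practice the hypothetical `threshold` would be replaced by a decision boundary learned from labeled phrases, but the sketch shows the core feature: the difference in abstractness between the two words.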

As I stated earlier, research doesn’t always turn out the way one expects. As I read through Turney’s paper, I found a problem that may divert the course of my project: the problem lies in the data sets the methods use. While the first method analyzes phrases in the format [Subject] [Verb] [Object], Turney’s method analyzes phrases in the format [Adjective] [Noun]. I can’t force one method to work on the other’s data set, because hypernym-hyponym relationships wouldn’t hold for the [Adjective] [Noun] format, and the abstractness relationships wouldn’t hold for the [Subject] [Verb] [Object] format. This means that I can’t directly compare the methods against one another as I had intended from the beginning, because they simply work on different things.

While this revelation certainly scared me initially, I’ve since found that it is by no means an insurmountable obstacle: setbacks happen. I simply need to adapt, and to do so, I’ll likely be shifting the focus of my project. Instead of trying to find an optimal method for a specific type of phrase, I’ll be writing multiple methods to handle a larger range of phrase types. The methods, I found, can potentially complement each other, since certain methods can address phrases that others cannot. I’m happy that I was able to identify this problem early on, and I can’t wait to see the new direction my project takes!

2 Replies to “Week 4: A Large Detour”

  1. Abby W. says:

    omg big mood

  2. Anish M. says:

    I’m interested to see how your plans pan out. Do you plan to build a lexical database for abstractness? If not, how do you plan to approach the problem? Although this seems like a detour, I feel that there’s a lot of promise in this specific method. Maybe you could combine findings from the previous blog post and then combine those to get a probability that a statement is metaphorical?
