
ImportError: cannot import name 'StanfordCoreNLPParser'




I've been trying to extract subject-predicate-object triples from sentences, and I found an API that does just that. However, when it was written it used the StanfordParser (from nltk.parse import stanford; stanford.StanfordParser), which is now defunct.



Instead, the StanfordCoreNLPParser is apparently what should be used now. Is this the right way to import it?



from nltk.parse.corenlp import StanfordCoreNLPParser
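As a sanity check before touching the rest of the file, I've been listing what nltk.parse.corenlp actually exports (CoreNLPParser is the name I see in recent nltk docs; whether StanfordCoreNLPParser exists at all is exactly what I'm trying to confirm). A small diagnostic sketch:

```python
# Diagnostic: list which *Parser classes this nltk build actually exposes,
# so I can see whether 'StanfordCoreNLPParser' is a real name or not.
import importlib


def available_parsers(module_name="nltk.parse.corenlp"):
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        # nltk not installed, or the module path is wrong
        return []
    return sorted(n for n in dir(mod) if n.endswith("Parser"))


print(available_parsers())
```

On my machine this prints names like CoreNLPParser but not StanfordCoreNLPParser, which would explain the ImportError.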


Anyway, I am trying to modify the rdf_triple.py file (found here) so that it uses the new StanfordCoreNLPParser while keeping all the functionality intact.



My code is below:



from nltk.parse import stanford
import os, sys
import operator
from nltk.parse.corenlp import StanfordCoreNLPParser

# java_path = r"C:\Program Files\Java\jdk1.8.0_31\bin\java.exe"
# os.environ['JAVAHOME'] = java_path
os.environ['STANFORD_PARSER'] = r'/path/stanford-parser-full-2018-02-27'
os.environ['STANFORD_MODELS'] = r'/path/stanford-parser-full-2018-02-27'


# The RDF extraction starts here; the problems with the code are
# above this line, I think.
class RDF_Triple():

    class RDF_SOP():

        def __init__(self, name, pos=''):
            self.name = name
            self.word = ''
            self.parent = ''
            self.grandparent = ''
            self.depth = ''
            self.predicate_list = []
            self.predicate_sibings = []
            self.pos = pos
            self.attr = []
            self.attr_trees = []

    def __init__(self, sentence):
        self.sentence = sentence
        self.clear_data()

    def clear_data(self):
        self.parser = StanfordCoreNLPParser(
            path_to_jar='/path/stanford-corenlp-3.9.1-models.jar',
            path_to_models_jar='/path/stanford-corenlp-3.9.1-models.jar')
        # UPDATE THIS PARSER!!!!
        self.first_NP = ''
        self.first_VP = ''
        self.parse_tree = None
        self.subject = RDF_Triple.RDF_SOP('subject')
        self.predicate = RDF_Triple.RDF_SOP('predicate', 'VB')
        self.Object = RDF_Triple.RDF_SOP('object')

    def find_NP(self, t):
        try:
            t.label()
        except AttributeError:
            pass
        else:
            # Now we know that t.label() is defined
            if t.label() == 'NP':
                if self.first_NP == '':
                    self.first_NP = t
            elif t.label() == 'VP':
                if self.first_VP == '':
                    self.first_VP = t
            for child in t:
                self.find_NP(child)

    def find_subject(self, t, parent=None, grandparent=None):
        if self.subject.word != '':
            return
        try:
            t.label()
        except AttributeError:
            pass
        else:
            # Now we know that t.label() is defined
            if t.label()[:2] == 'NN':
                if self.subject.word == '':
                    self.subject.word = t.leaves()[0]
                    self.subject.pos = t.label()
                    self.subject.parent = parent
                    self.subject.grandparent = grandparent
            else:
                for child in t:
                    self.find_subject(child, parent=t, grandparent=parent)

    def find_predicate(self, t, parent=None, grandparent=None, depth=0):
        try:
            t.label()
        except AttributeError:
            pass
        else:
            if t.label()[:2] == 'VB':
                self.predicate.predicate_list.append(
                    (t.leaves()[0], depth, parent, grandparent))
            for child in t:
                self.find_predicate(child, parent=t, grandparent=parent,
                                    depth=depth + 1)

    def find_deepest_predicate(self):
        if not self.predicate.predicate_list:
            return '', '', '', ''
        return max(self.predicate.predicate_list, key=operator.itemgetter(1))

    def extract_word_and_pos(self, t, depth=0, words=[]):
        try:
            t.label()
        except AttributeError:
            pass
        else:
            # Now we know that t.label() is defined
            if t.height() == 2:
                words.append((t.leaves()[0], t.label()))
            for child in t:
                self.extract_word_and_pos(child, depth + 1, words)
        return words

    def print_tree(self, t, depth=0):
        try:
            t.label()
        except AttributeError:
            print(t)
        else:
            # Now we know that t.label() is defined
            print('(')  # , t.label(), t.leaves()[0]
            for child in t:
                self.print_tree(child, depth + 1)
            print(') ')

    def find_object(self):
        for t in self.predicate.parent:
            if self.Object.word == '':
                self.find_object_NP_PP(t, t.label(), self.predicate.parent,
                                       self.predicate.grandparent)

    def find_object_NP_PP(self, t, phrase_type, parent=None, grandparent=None):
        '''
        Finds the object, given it is an NP, PP, or ADJP.
        '''
        if self.Object.word != '':
            return
        try:
            t.label()
        except AttributeError:
            pass
        else:
            # Now we know that t.label() is defined
            if t.label()[:2] == 'NN' and phrase_type in ['NP', 'PP']:
                if self.Object.word == '':
                    self.Object.word = t.leaves()[0]
                    self.Object.pos = t.label()
                    self.Object.parent = parent
                    self.Object.grandparent = grandparent
            elif t.label()[:2] == 'JJ' and phrase_type == 'ADJP':
                if self.Object.word == '':
                    self.Object.word = t.leaves()[0]
                    self.Object.pos = t.label()
                    self.Object.parent = parent
                    self.Object.grandparent = grandparent
            else:
                for child in t:
                    self.find_object_NP_PP(child, phrase_type, parent=t,
                                           grandparent=parent)

    def get_attributes(self, pos, sibling_tree, grandparent):
        rdf_type_attr = []
        if pos[:2] == 'JJ':
            for item in sibling_tree:
                if item.label()[:2] == 'RB':
                    rdf_type_attr.append((item.leaves()[0], item.label()))
        elif pos[:2] == 'NN':
            for item in sibling_tree:
                if item.label()[:2] in ['DT', 'PR', 'PO', 'JJ', 'CD']:
                    rdf_type_attr.append((item.leaves()[0], item.label()))
                if item.label() in ['QP', 'NP']:
                    # append a tree
                    rdf_type_attr.append((item, item.label()))
        elif pos[:2] == 'VB':
            for item in sibling_tree:
                if item.label()[:2] == 'AD':
                    rdf_type_attr.append((item, item.label()))

        if grandparent:
            if pos[:2] in ['NN', 'JJ']:
                for uncle in grandparent:
                    if uncle.label() == 'PP':
                        rdf_type_attr.append((uncle, uncle.label()))
            elif pos[:2] == 'VB':
                for uncle in grandparent:
                    if uncle.label()[:2] == 'VB':
                        rdf_type_attr.append((uncle, uncle.label()))

        return self.attr_to_words(rdf_type_attr)

    def attr_to_words(self, attr):
        new_attr_words = []
        new_attr_trees = []
        for tup in attr:
            if type(tup[0]) != str:
                if tup[0].height() == 2:
                    new_attr_words.append((tup[0].leaves()[0], tup[0].label()))
                else:
                    # new_attr_words.extend(self.extract_word_and_pos(tup[0]))
                    new_attr_trees.append(tup[0].unicode_repr())
            else:
                new_attr_words.append(tup)
        return new_attr_words, new_attr_trees

    def jsonify_rdf(self):
        return {'sentence': self.sentence,
                'parse_tree': self.parse_tree.unicode_repr(),
                'predicate': {'word': self.predicate.word, 'POS': self.predicate.pos,
                              'Word Attributes': self.predicate.attr,
                              'Tree Attributes': self.predicate.attr_trees},
                'subject': {'word': self.subject.word, 'POS': self.subject.pos,
                            'Word Attributes': self.subject.attr,
                            'Tree Attributes': self.subject.attr_trees},
                'object': {'word': self.Object.word, 'POS': self.Object.pos,
                           'Word Attributes': self.Object.attr,
                           'Tree Attributes': self.Object.attr_trees},
                'rdf': [self.subject.word, self.predicate.word, self.Object.word]}

    def main(self):
        self.clear_data()
        self.parse_tree = self.parser.raw_parse(self.sentence)[0]
        self.find_NP(self.parse_tree)
        self.find_subject(self.first_NP)
        self.find_predicate(self.first_VP)
        if self.subject.word == '' and self.first_NP != '':
            self.subject.word = self.first_NP.leaves()[0]
        self.predicate.word, self.predicate.depth, self.predicate.parent, self.predicate.grandparent = self.find_deepest_predicate()
        self.find_object()
        self.subject.attr, self.subject.attr_trees = self.get_attributes(
            self.subject.pos, self.subject.parent, self.subject.grandparent)
        self.predicate.attr, self.predicate.attr_trees = self.get_attributes(
            self.predicate.pos, self.predicate.parent, self.predicate.grandparent)
        self.Object.attr, self.Object.attr_trees = self.get_attributes(
            self.Object.pos, self.Object.parent, self.Object.grandparent)
        self.answer = self.jsonify_rdf()


# =============================================================================
# if __name__ == '__main__':
#     try:
#         sentence = sys.argv[1]
#         sentence = 'A rare black squirrel has become a regular visitor to a suburban garden'
#     except IndexError:
#         print("Enter in your sentence")
#         sentence = 'A rare black squirrel has become a regular visitor to a suburban garden'
#         print("Here's an example")
#         print(sentence)
#
#     # sentence = 'The boy dunked the basketball'
#     sentence = 'They also made the substance able to last longer in the bloodstream, which led to more stable blood sugar levels and less frequent injections.'
#     sentence = 'A rare black squirrel has become a regular visitor to a suburban garden'
#     rdf = RDF_Triple(sentence)
#     rdf.main()
#
#     ans = rdf.answer
#     print(ans)
# =============================================================================
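For what it's worth, the parts of the file I do follow: find_deepest_predicate is just a max() over the (word, depth, parent, grandparent) tuples that find_predicate accumulates, keyed on the depth field. A minimal standalone sketch of that selection, with made-up tuples:

```python
import operator

# Toy stand-ins for the (word, depth, parent, grandparent) tuples that
# find_predicate collects while walking the parse tree.
candidates = [("has", 1, None, None), ("become", 3, None, None)]

# The deepest predicate wins: max() keyed on the depth field (index 1).
deepest = max(candidates, key=operator.itemgetter(1))
print(deepest[0])  # -> become
```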


When I run my code, I get the error:




ImportError: cannot import name 'StanfordCoreNLPParser'.




Does anyone have an idea how to fix this?
































  • datascience.stackexchange.com/help/merging-accounts – Stephen Rauch, Aug 10 '18 at 21:53















1












$begingroup$


I've been trying to extract subject-predicate-object triples from sentences and found this awesome API that did just that. However, when it was written, it used the StanfordParser (from nltk.parse import stanford, stanford.StanfordParser), which is now defunct.



Instead, what is to be used now is the StanfordCoreNLPParser. Is this the right way to import it?



from nltk.parse.corenlp import StanfordCoreNLPParser


Anyways, I am trying to modify it rdf_triple.py file (found here) to now use the new StanfordCoreNLPParser, while keeping all the functionality intact.



My code is below:



from nltk.parse import stanford
import os, sys
import operator
from nltk.parse.corenlp import StanfordCoreNLPParser
# java_path = r"C:Program FilesJavajdk1.8.0_31binjava.exe"
# os.environ['JAVAHOME'] = java_path
os.environ['STANFORD_PARSER'] = r'/path/stanford-parser-full-2018-02-27'
os.environ['STANFORD_MODELS'] = r'/path/stanford-parser-full-2018-02-27'


# the RDF function starts here, the problems with the code are
# above this line, I think
class RDF_Triple():

class RDF_SOP():

def __init__(self, name, pos=''):
self.name = name
self.word = ''
self.parent = ''
self.grandparent = ''
self.depth = ''
self.predicate_list = []
self.predicate_sibings = []
self.pos = pos
self.attr = []
self.attr_trees = []


def __init__(self, sentence):
self.sentence = sentence
self.clear_data()


def clear_data(self):
self.parser = nltk.parse.corenlp.StanfordCoreNLPParser(
path_to_jar='/path/stanford-corenlp-3.9.1-models.jar',
path_to_models_jar='/path/stanford-corenlp-3.9.1-models.jar')
# UPDATE THIS PARSER!!!!
self.first_NP = ''
self.first_VP = ''
self.parse_tree = None
self.subject = RDF_Triple.RDF_SOP('subject')
self.predicate = RDF_Triple.RDF_SOP('predicate', 'VB')
self.Object = RDF_Triple.RDF_SOP('object')


def find_NP(self, t):
try:
t.label()
except AttributeError:
pass
else:
# Now we know that t.node is defined
if t.label() == 'NP':
if self.first_NP == '':
self.first_NP = t
elif t.label() == 'VP':
if self.first_VP == '':
self.first_VP = t
for child in t:
self.find_NP(child)


def find_subject(self, t, parent=None, grandparent=None):
if self.subject.word != '':
return
try:
t.label()
except AttributeError:
pass
else:
# Now we know that t.node is defined
if t.label()[:2] == 'NN':
if self.subject.word == '':
self.subject.word = t.leaves()[0]
self.subject.pos = t.label()
self.subject.parent = parent
self.subject.grandparent = grandparent
else:
for child in t:
self.find_subject(child, parent=t, grandparent=parent)


def find_predicate(self, t, parent=None, grandparent=None, depth=0):
try:
t.label()
except AttributeError:
pass
else:
if t.label()[:2] == 'VB':
self.predicate.predicate_list.append((t.leaves()[0], depth, parent, grandparent))

for child in t:
self.find_predicate(child, parent=t, grandparent=parent, depth=depth+1)


def find_deepest_predicate(self):
if not self.predicate.predicate_list:
return '','','',''
return max(self.predicate.predicate_list, key=operator.itemgetter(1))


def extract_word_and_pos(self, t, depth=0, words=[]):
try:
t.label()
except AttributeError:
# print t
# print 'error', t
pass
else:
# Now we know that t.node is defined
if t.height() == 2:
# self.word_pos_holder.append((t.label(), t.leaves()[0]))
words.append((t.leaves()[0], t.label()))
for child in t:
self.extract_word_and_pos(child, depth+1, words)
return words



def print_tree(self, t, depth=0):
try:
t.label()
except AttributeError:
print(t)
# print 'error', t
pass
else:
# Now we know that t.node is defined
print('(')#, t.label(), t.leaves()[0]
for child in t:
self.print_tree(child, depth+1)
print(') ')


def find_object(self):
for t in self.predicate.parent:
if self.Object.word == '':
self.find_object_NP_PP(t, t.label(), self.predicate.parent, self.predicate.grandparent)


def find_object_NP_PP(self, t, phrase_type, parent=None, grandparent=None):
'''
finds the object given its a NP or PP or ADJP
'''
if self.Object.word != '':
return
try:
t.label()
except AttributeError:
pass
else:
# Now we know that t.node is defined
if t.label()[:2] == 'NN' and phrase_type in ['NP', 'PP']:
if self.Object.word == '':
self.Object.word = t.leaves()[0]
self.Object.pos = t.label()
self.Object.parent = parent
self.Object.grandparent = grandparent
elif t.label()[:2] == 'JJ' and phrase_type == 'ADJP':
if self.Object.word == '':
self.Object.word = t.leaves()[0]
self.Object.pos = t.label()
self.Object.parent = parent
self.Object.grandparent = grandparent
else:
for child in t:
self.find_object_NP_PP(child, phrase_type, parent=t, grandparent=parent)


def get_attributes(self, pos, sibling_tree, grandparent):
rdf_type_attr = []
if pos[:2] == 'JJ':
for item in sibling_tree:
if item.label()[:2] == 'RB':
rdf_type_attr.append((item.leaves()[0], item.label()))
else:
if pos[:2] == 'NN':
for item in sibling_tree:
if item.label()[:2] in ['DT', 'PR', 'PO', 'JJ', 'CD']:
rdf_type_attr.append((item.leaves()[0], item.label()))
if item.label() in ['QP', 'NP']:
#append a tree
rdf_type_attr.append(item, item.label())
elif pos[:2] == 'VB':
for item in sibling_tree:
if item.label()[:2] == 'AD':
rdf_type_attr.append((item, item.label()))

if grandparent:
if pos[:2] in ['NN', 'JJ']:
for uncle in grandparent:
if uncle.label() == 'PP':
rdf_type_attr.append((uncle, uncle.label()))
elif pos[:2] == 'VB':
for uncle in grandparent:
if uncle.label()[:2] == 'VB':
rdf_type_attr.append((uncle, uncle.label()))


return self.attr_to_words(rdf_type_attr)


def attr_to_words(self, attr):
new_attr_words = []
new_attr_trees = []
for tup in attr:
if type(tup[0]) != str:
if tup[0].height() == 2:
new_attr_words.append((tup[0].leaves()[0], tup[0].label()))
else:
# new_attr_words.extend(self.extract_word_and_pos(tup[0]))
new_attr_trees.append(tup[0].unicode_repr())
else:
new_attr_words.append(tup)
return new_attr_words, new_attr_trees

def jsonify_rdf(self):
return 'sentence':self.sentence,
'parse_tree':self.parse_tree.unicode_repr(),
'predicate':'word':self.predicate.word, 'POS':self.predicate.pos,
'Word Attributes':self.predicate.attr, 'Tree Attributes':self.predicate.attr_trees,
'subject':'word':self.subject.word, 'POS':self.subject.pos,
'Word Attributes':self.subject.attr, 'Tree Attributes':self.subject.attr_trees,
'object':'word':self.Object.word, 'POS':self.Object.pos,
'Word Attributes':self.Object.attr, 'Tree Attributes':self.Object.attr_trees,
'rdf':[self.subject.word, self.predicate.word, self.Object.word]



def main(self):
self.clear_data()
self.parse_tree = self.parser.raw_parse(self.sentence)[0]
self.find_NP(self.parse_tree)
self.find_subject(self.first_NP)
self.find_predicate(self.first_VP)
if self.subject.word == '' and self.first_NP != '':
self.subject.word = self.first_NP.leaves()[0]
self.predicate.word, self.predicate.depth, self.predicate.parent, self.predicate.grandparent = self.find_deepest_predicate()
self.find_object()
self.subject.attr, self.subject.attr_trees = self.get_attributes(self.subject.pos, self.subject.parent, self.subject.grandparent)
self.predicate.attr, self.predicate.attr_trees = self.get_attributes(self.predicate.pos, self.predicate.parent, self.predicate.grandparent)
self.Object.attr, self.Object.attr_trees = self.get_attributes(self.Object.pos, self.Object.parent, self.Object.grandparent)
self.answer = self.jsonify_rdf()


# =============================================================================
# if __name__ == '__main__':
# try:
# sentence = sys.argv[1]
# sentence = 'A rare black squirrel has become a regular visitor to a suburban garden'
# except IndexError:
# print("Enter in your sentence")
# sentence = 'A rare black squirrel has become a regular visitor to a suburban garden'
# print("Heres an example")
# print(sentence)
#
# # sentence = 'The boy dunked the basketball'
# sentence = 'They also made the substance able to last longer in the bloodstream, which led to more stable blood sugar levels and less frequent injections.'
# sentence = 'A rare black squirrel has become a regular visitor to a suburban garden'
# rdf = RDF_Triple(sentence)
# rdf.main()
#
# ans = rdf.answer
# print(ans)
# =============================================================================


What happens when I run my code is I get the error:




ImportError: cannot import name 'StanfordCoreNLPParser'.




Does anyone have an idea how to fix this?










share|improve this question











$endgroup$











  • $begingroup$
    datascience.stackexchange.com/help/merging-accounts
    $endgroup$
    – Stephen Rauch
    Aug 10 '18 at 21:53













1












1








1





$begingroup$


I've been trying to extract subject-predicate-object triples from sentences and found this awesome API that did just that. However, when it was written, it used the StanfordParser (from nltk.parse import stanford, stanford.StanfordParser), which is now defunct.



Instead, what is to be used now is the StanfordCoreNLPParser. Is this the right way to import it?



from nltk.parse.corenlp import StanfordCoreNLPParser


Anyways, I am trying to modify it rdf_triple.py file (found here) to now use the new StanfordCoreNLPParser, while keeping all the functionality intact.



My code is below:



from nltk.parse import stanford
import os, sys
import operator
from nltk.parse.corenlp import StanfordCoreNLPParser
# java_path = r"C:Program FilesJavajdk1.8.0_31binjava.exe"
# os.environ['JAVAHOME'] = java_path
os.environ['STANFORD_PARSER'] = r'/path/stanford-parser-full-2018-02-27'
os.environ['STANFORD_MODELS'] = r'/path/stanford-parser-full-2018-02-27'


# the RDF function starts here, the problems with the code are
# above this line, I think
class RDF_Triple():

class RDF_SOP():

def __init__(self, name, pos=''):
self.name = name
self.word = ''
self.parent = ''
self.grandparent = ''
self.depth = ''
self.predicate_list = []
self.predicate_sibings = []
self.pos = pos
self.attr = []
self.attr_trees = []


def __init__(self, sentence):
self.sentence = sentence
self.clear_data()


def clear_data(self):
self.parser = nltk.parse.corenlp.StanfordCoreNLPParser(
path_to_jar='/path/stanford-corenlp-3.9.1-models.jar',
path_to_models_jar='/path/stanford-corenlp-3.9.1-models.jar')
# UPDATE THIS PARSER!!!!
self.first_NP = ''
self.first_VP = ''
self.parse_tree = None
self.subject = RDF_Triple.RDF_SOP('subject')
self.predicate = RDF_Triple.RDF_SOP('predicate', 'VB')
self.Object = RDF_Triple.RDF_SOP('object')


def find_NP(self, t):
try:
t.label()
except AttributeError:
pass
else:
# Now we know that t.node is defined
if t.label() == 'NP':
if self.first_NP == '':
self.first_NP = t
elif t.label() == 'VP':
if self.first_VP == '':
self.first_VP = t
for child in t:
self.find_NP(child)


def find_subject(self, t, parent=None, grandparent=None):
if self.subject.word != '':
return
try:
t.label()
except AttributeError:
pass
else:
# Now we know that t.node is defined
if t.label()[:2] == 'NN':
if self.subject.word == '':
self.subject.word = t.leaves()[0]
self.subject.pos = t.label()
self.subject.parent = parent
self.subject.grandparent = grandparent
else:
for child in t:
self.find_subject(child, parent=t, grandparent=parent)


def find_predicate(self, t, parent=None, grandparent=None, depth=0):
try:
t.label()
except AttributeError:
pass
else:
if t.label()[:2] == 'VB':
self.predicate.predicate_list.append((t.leaves()[0], depth, parent, grandparent))

for child in t:
self.find_predicate(child, parent=t, grandparent=parent, depth=depth+1)


def find_deepest_predicate(self):
if not self.predicate.predicate_list:
return '','','',''
return max(self.predicate.predicate_list, key=operator.itemgetter(1))


def extract_word_and_pos(self, t, depth=0, words=[]):
try:
t.label()
except AttributeError:
# print t
# print 'error', t
pass
else:
# Now we know that t.node is defined
if t.height() == 2:
# self.word_pos_holder.append((t.label(), t.leaves()[0]))
words.append((t.leaves()[0], t.label()))
for child in t:
self.extract_word_and_pos(child, depth+1, words)
return words



def print_tree(self, t, depth=0):
try:
t.label()
except AttributeError:
print(t)
# print 'error', t
pass
else:
# Now we know that t.node is defined
print('(')#, t.label(), t.leaves()[0]
for child in t:
self.print_tree(child, depth+1)
print(') ')


def find_object(self):
for t in self.predicate.parent:
if self.Object.word == '':
self.find_object_NP_PP(t, t.label(), self.predicate.parent, self.predicate.grandparent)


def find_object_NP_PP(self, t, phrase_type, parent=None, grandparent=None):
'''
finds the object given its a NP or PP or ADJP
'''
if self.Object.word != '':
return
try:
t.label()
except AttributeError:
pass
else:
# Now we know that t.node is defined
if t.label()[:2] == 'NN' and phrase_type in ['NP', 'PP']:
if self.Object.word == '':
self.Object.word = t.leaves()[0]
self.Object.pos = t.label()
self.Object.parent = parent
self.Object.grandparent = grandparent
elif t.label()[:2] == 'JJ' and phrase_type == 'ADJP':
if self.Object.word == '':
self.Object.word = t.leaves()[0]
self.Object.pos = t.label()
self.Object.parent = parent
self.Object.grandparent = grandparent
else:
for child in t:
self.find_object_NP_PP(child, phrase_type, parent=t, grandparent=parent)


def get_attributes(self, pos, sibling_tree, grandparent):
rdf_type_attr = []
if pos[:2] == 'JJ':
for item in sibling_tree:
if item.label()[:2] == 'RB':
rdf_type_attr.append((item.leaves()[0], item.label()))
else:
if pos[:2] == 'NN':
for item in sibling_tree:
if item.label()[:2] in ['DT', 'PR', 'PO', 'JJ', 'CD']:
rdf_type_attr.append((item.leaves()[0], item.label()))
if item.label() in ['QP', 'NP']:
#append a tree
rdf_type_attr.append(item, item.label())
elif pos[:2] == 'VB':
for item in sibling_tree:
if item.label()[:2] == 'AD':
rdf_type_attr.append((item, item.label()))

if grandparent:
if pos[:2] in ['NN', 'JJ']:
for uncle in grandparent:
if uncle.label() == 'PP':
rdf_type_attr.append((uncle, uncle.label()))
elif pos[:2] == 'VB':
for uncle in grandparent:
if uncle.label()[:2] == 'VB':
rdf_type_attr.append((uncle, uncle.label()))


return self.attr_to_words(rdf_type_attr)


def attr_to_words(self, attr):
new_attr_words = []
new_attr_trees = []
for tup in attr:
if type(tup[0]) != str:
if tup[0].height() == 2:
new_attr_words.append((tup[0].leaves()[0], tup[0].label()))
else:
# new_attr_words.extend(self.extract_word_and_pos(tup[0]))
new_attr_trees.append(tup[0].unicode_repr())
else:
new_attr_words.append(tup)
return new_attr_words, new_attr_trees

def jsonify_rdf(self):
return 'sentence':self.sentence,
'parse_tree':self.parse_tree.unicode_repr(),
'predicate':'word':self.predicate.word, 'POS':self.predicate.pos,
'Word Attributes':self.predicate.attr, 'Tree Attributes':self.predicate.attr_trees,
'subject':'word':self.subject.word, 'POS':self.subject.pos,
'Word Attributes':self.subject.attr, 'Tree Attributes':self.subject.attr_trees,
'object':'word':self.Object.word, 'POS':self.Object.pos,
'Word Attributes':self.Object.attr, 'Tree Attributes':self.Object.attr_trees,
'rdf':[self.subject.word, self.predicate.word, self.Object.word]



def main(self):
self.clear_data()
self.parse_tree = self.parser.raw_parse(self.sentence)[0]
self.find_NP(self.parse_tree)
self.find_subject(self.first_NP)
self.find_predicate(self.first_VP)
if self.subject.word == '' and self.first_NP != '':
self.subject.word = self.first_NP.leaves()[0]
self.predicate.word, self.predicate.depth, self.predicate.parent, self.predicate.grandparent = self.find_deepest_predicate()
self.find_object()
self.subject.attr, self.subject.attr_trees = self.get_attributes(self.subject.pos, self.subject.parent, self.subject.grandparent)
self.predicate.attr, self.predicate.attr_trees = self.get_attributes(self.predicate.pos, self.predicate.parent, self.predicate.grandparent)
self.Object.attr, self.Object.attr_trees = self.get_attributes(self.Object.pos, self.Object.parent, self.Object.grandparent)
self.answer = self.jsonify_rdf()


# =============================================================================
# if __name__ == '__main__':
# try:
# sentence = sys.argv[1]
# sentence = 'A rare black squirrel has become a regular visitor to a suburban garden'
# except IndexError:
# print("Enter in your sentence")
# sentence = 'A rare black squirrel has become a regular visitor to a suburban garden'
# print("Heres an example")
# print(sentence)
#
# # sentence = 'The boy dunked the basketball'
# sentence = 'They also made the substance able to last longer in the bloodstream, which led to more stable blood sugar levels and less frequent injections.'
# sentence = 'A rare black squirrel has become a regular visitor to a suburban garden'
# rdf = RDF_Triple(sentence)
# rdf.main()
#
# ans = rdf.answer
# print(ans)
# =============================================================================


What happens when I run my code is I get the error:




ImportError: cannot import name 'StanfordCoreNLPParser'.




Does anyone have an idea how to fix this?










share|improve this question











$endgroup$




I've been trying to extract subject-predicate-object triples from sentences and found this awesome API that did just that. However, when it was written, it used the StanfordParser (from nltk.parse import stanford, stanford.StanfordParser), which is now defunct.



Instead, what is to be used now is the StanfordCoreNLPParser. Is this the right way to import it?



from nltk.parse.corenlp import StanfordCoreNLPParser


Anyways, I am trying to modify it rdf_triple.py file (found here) to now use the new StanfordCoreNLPParser, while keeping all the functionality intact.



My code is below:



from nltk.parse import stanford
import os, sys
import operator
from nltk.parse.corenlp import StanfordCoreNLPParser
# java_path = r"C:Program FilesJavajdk1.8.0_31binjava.exe"
# os.environ['JAVAHOME'] = java_path
os.environ['STANFORD_PARSER'] = r'/path/stanford-parser-full-2018-02-27'
os.environ['STANFORD_MODELS'] = r'/path/stanford-parser-full-2018-02-27'


# the RDF function starts here, the problems with the code are
# above this line, I think
class RDF_Triple():

class RDF_SOP():

def __init__(self, name, pos=''):
self.name = name
self.word = ''
self.parent = ''
self.grandparent = ''
self.depth = ''
self.predicate_list = []
self.predicate_sibings = []
self.pos = pos
self.attr = []
self.attr_trees = []


def __init__(self, sentence):
self.sentence = sentence
self.clear_data()


def clear_data(self):
self.parser = nltk.parse.corenlp.StanfordCoreNLPParser(
path_to_jar='/path/stanford-corenlp-3.9.1-models.jar',
path_to_models_jar='/path/stanford-corenlp-3.9.1-models.jar')
# UPDATE THIS PARSER!!!!
self.first_NP = ''
self.first_VP = ''
self.parse_tree = None
self.subject = RDF_Triple.RDF_SOP('subject')
self.predicate = RDF_Triple.RDF_SOP('predicate', 'VB')
self.Object = RDF_Triple.RDF_SOP('object')


def find_NP(self, t):
try:
t.label()
except AttributeError:
pass
else:
# Now we know that t.node is defined
if t.label() == 'NP':
if self.first_NP == '':
self.first_NP = t
elif t.label() == 'VP':
if self.first_VP == '':
self.first_VP = t
for child in t:
self.find_NP(child)


def find_subject(self, t, parent=None, grandparent=None):
if self.subject.word != '':
return
try:
t.label()
except AttributeError:
pass
else:
# Now we know that t.node is defined
if t.label()[:2] == 'NN':
if self.subject.word == '':
self.subject.word = t.leaves()[0]
self.subject.pos = t.label()
self.subject.parent = parent
self.subject.grandparent = grandparent
else:
for child in t:
self.find_subject(child, parent=t, grandparent=parent)


def find_predicate(self, t, parent=None, grandparent=None, depth=0):
try:
t.label()
except AttributeError:
pass
else:
if t.label()[:2] == 'VB':
self.predicate.predicate_list.append((t.leaves()[0], depth, parent, grandparent))

for child in t:
self.find_predicate(child, parent=t, grandparent=parent, depth=depth+1)


def find_deepest_predicate(self):
if not self.predicate.predicate_list:
return '','','',''
return max(self.predicate.predicate_list, key=operator.itemgetter(1))


def extract_word_and_pos(self, t, depth=0, words=[]):
try:
t.label()
except AttributeError:
# print t
# print 'error', t
pass
else:
# Now we know that t.node is defined
if t.height() == 2:
# self.word_pos_holder.append((t.label(), t.leaves()[0]))
words.append((t.leaves()[0], t.label()))
for child in t:
self.extract_word_and_pos(child, depth+1, words)
return words



def print_tree(self, t, depth=0):
try:
t.label()
except AttributeError:
print(t)
# print 'error', t
pass
else:
# Now we know that t.node is defined
print('(')#, t.label(), t.leaves()[0]
for child in t:
self.print_tree(child, depth+1)
print(') ')


def find_object(self):
for t in self.predicate.parent:
if self.Object.word == '':
self.find_object_NP_PP(t, t.label(), self.predicate.parent, self.predicate.grandparent)


    def find_object_NP_PP(self, t, phrase_type, parent=None, grandparent=None):
        '''
        Finds the object, given that it sits inside an NP, PP or ADJP.
        '''
        if self.Object.word != '':
            return
        try:
            t.label()
        except AttributeError:
            pass
        else:
            if t.label()[:2] == 'NN' and phrase_type in ['NP', 'PP']:
                if self.Object.word == '':
                    self.Object.word = t.leaves()[0]
                    self.Object.pos = t.label()
                    self.Object.parent = parent
                    self.Object.grandparent = grandparent
            elif t.label()[:2] == 'JJ' and phrase_type == 'ADJP':
                if self.Object.word == '':
                    self.Object.word = t.leaves()[0]
                    self.Object.pos = t.label()
                    self.Object.parent = parent
                    self.Object.grandparent = grandparent
            else:
                for child in t:
                    self.find_object_NP_PP(child, phrase_type, parent=t, grandparent=parent)


    def get_attributes(self, pos, sibling_tree, grandparent):
        rdf_type_attr = []
        if pos[:2] == 'JJ':
            for item in sibling_tree:
                if item.label()[:2] == 'RB':
                    rdf_type_attr.append((item.leaves()[0], item.label()))
        elif pos[:2] == 'NN':
            for item in sibling_tree:
                if item.label()[:2] in ['DT', 'PR', 'PO', 'JJ', 'CD']:
                    rdf_type_attr.append((item.leaves()[0], item.label()))
                if item.label() in ['QP', 'NP']:
                    # Append the whole subtree as a (tree, label) tuple;
                    # the original passed two positional args, a TypeError.
                    rdf_type_attr.append((item, item.label()))
        elif pos[:2] == 'VB':
            for item in sibling_tree:
                if item.label()[:2] == 'AD':
                    rdf_type_attr.append((item, item.label()))

        if grandparent:
            if pos[:2] in ['NN', 'JJ']:
                for uncle in grandparent:
                    if uncle.label() == 'PP':
                        rdf_type_attr.append((uncle, uncle.label()))
            elif pos[:2] == 'VB':
                for uncle in grandparent:
                    if uncle.label()[:2] == 'VB':
                        rdf_type_attr.append((uncle, uncle.label()))

        return self.attr_to_words(rdf_type_attr)


    def attr_to_words(self, attr):
        new_attr_words = []
        new_attr_trees = []
        for tup in attr:
            if not isinstance(tup[0], str):
                if tup[0].height() == 2:
                    new_attr_words.append((tup[0].leaves()[0], tup[0].label()))
                else:
                    new_attr_trees.append(tup[0].unicode_repr())
            else:
                new_attr_words.append(tup)
        return new_attr_words, new_attr_trees

    def jsonify_rdf(self):
        return {'sentence': self.sentence,
                'parse_tree': self.parse_tree.unicode_repr(),
                'predicate': {'word': self.predicate.word, 'POS': self.predicate.pos,
                              'Word Attributes': self.predicate.attr,
                              'Tree Attributes': self.predicate.attr_trees},
                'subject': {'word': self.subject.word, 'POS': self.subject.pos,
                            'Word Attributes': self.subject.attr,
                            'Tree Attributes': self.subject.attr_trees},
                'object': {'word': self.Object.word, 'POS': self.Object.pos,
                           'Word Attributes': self.Object.attr,
                           'Tree Attributes': self.Object.attr_trees},
                'rdf': [self.subject.word, self.predicate.word, self.Object.word]}


    def main(self):
        self.clear_data()
        self.parse_tree = self.parser.raw_parse(self.sentence)[0]
        self.find_NP(self.parse_tree)
        self.find_subject(self.first_NP)
        self.find_predicate(self.first_VP)
        if self.subject.word == '' and self.first_NP != '':
            self.subject.word = self.first_NP.leaves()[0]
        self.predicate.word, self.predicate.depth, self.predicate.parent, self.predicate.grandparent = self.find_deepest_predicate()
        self.find_object()
        self.subject.attr, self.subject.attr_trees = self.get_attributes(self.subject.pos, self.subject.parent, self.subject.grandparent)
        self.predicate.attr, self.predicate.attr_trees = self.get_attributes(self.predicate.pos, self.predicate.parent, self.predicate.grandparent)
        self.Object.attr, self.Object.attr_trees = self.get_attributes(self.Object.pos, self.Object.parent, self.Object.grandparent)
        self.answer = self.jsonify_rdf()


# =============================================================================
# if __name__ == '__main__':
#     try:
#         sentence = sys.argv[1]
#         sentence = 'A rare black squirrel has become a regular visitor to a suburban garden'
#     except IndexError:
#         print("Enter in your sentence")
#         sentence = 'A rare black squirrel has become a regular visitor to a suburban garden'
#         print("Heres an example")
#         print(sentence)
#
#     # sentence = 'The boy dunked the basketball'
#     sentence = 'They also made the substance able to last longer in the bloodstream, which led to more stable blood sugar levels and less frequent injections.'
#     sentence = 'A rare black squirrel has become a regular visitor to a suburban garden'
#     rdf = RDF_Triple(sentence)
#     rdf.main()
#
#     ans = rdf.answer
#     print(ans)
# =============================================================================
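Unrelated to the ImportError, one Python pitfall worth knowing about when writing accumulator methods like `extract_word_and_pos`: a mutable default argument such as `words=[]` is evaluated once at function definition time and shared across all calls. A standalone demonstration (the `collect` helpers here are made up for illustration):

```python
def collect(value, acc=[]):  # BUG: the same list object is reused on every call
    acc.append(value)
    return acc

print(collect('a'))  # ['a']
print(collect('b'))  # ['a', 'b'] -- the list persisted between calls

def collect_fixed(value, acc=None):
    # Create a fresh list per call unless one is explicitly passed in.
    if acc is None:
        acc = []
    acc.append(value)
    return acc

print(collect_fixed('a'))  # ['a']
print(collect_fixed('b'))  # ['b']
```

This is why a recursive tree walk that accumulates into a default list can return stale results from earlier sentences.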


When I run my code, I get the following error:

ImportError: cannot import name 'StanfordCoreNLPParser'

Does anyone have an idea of how to fix this?







machine-learning python nlp stanford-nlp






edited Aug 10 '18 at 21:55 by Stephen Rauch
asked Aug 10 '18 at 20:07 by mathdeviant

  • datascience.stackexchange.com/help/merging-accounts – Stephen Rauch, Aug 10 '18 at 21:53
















1 Answer


















I do not know of anything called StanfordCoreNLPParser. The stanfordcorenlp package has StanfordCoreNLP or StanfordParser:



from stanfordcorenlp import StanfordCoreNLP, StanfordParser
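One quick way to confirm what this error means: `from pkg import Name` raises ImportError exactly when the module object has no attribute of that name. A small self-contained illustration using a stand-in module object (so it runs even without `stanfordcorenlp` installed):

```python
import types

# Stand-in for the stanfordcorenlp package: it defines StanfordCoreNLP,
# but nothing called StanfordCoreNLPParser.
pkg = types.ModuleType('stanfordcorenlp')
pkg.StanfordCoreNLP = object

print(hasattr(pkg, 'StanfordCoreNLP'))        # True
print(hasattr(pkg, 'StanfordCoreNLPParser'))  # False -> `from ... import` raises ImportError
```

With the real package installed, `print(dir(stanfordcorenlp))` lists the names that are actually importable from it.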





answered Aug 12 '18 at 18:13 by Brian Spiering
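If code has to run against package versions that expose different class names, one option is to probe for the first available name at runtime rather than hard-coding one import. A sketch of that pattern; `first_available` is a made-up helper, demonstrated on the stdlib `json` module as a stand-in:

```python
import importlib

def first_available(module_name, candidates):
    """Return the first attribute name from candidates that the module defines."""
    mod = importlib.import_module(module_name)
    for name in candidates:
        if hasattr(mod, name):
            return name
    return None

# Stdlib 'json' stands in for a third-party package here:
# 'StanfordCoreNLPParser' is absent, 'loads' exists.
print(first_available('json', ['StanfordCoreNLPParser', 'loads']))  # loads
```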


























