That patch of code could be restructured a bit to make it more readable. Here are a couple of small suggestions that jump out at me:
1. Perhaps `.reduceLeft(_ + _)` could be replaced with a `sum` function or method (assuming one exists in Scala?).
2. If the `topKeywords` collection returned a default value with a `.score` of 0 when queried with a key it doesn't contain, the `headOption getOrElse null` followed by the match on null would not be necessary.
E.g. in Python it might look something like this:
from collections import namedtuple, defaultdict

Keyword = namedtuple('Keyword', ['score'])  # ...etc... (other fields elided)

# unknown keys get a zero-score Keyword instead of raising KeyError
top_keywords = defaultdict(lambda: Keyword(score=0))  # ...etc...

def sbs(words):
    if words:
        return (1.0 / len(words)) * sum(top_keywords[w].score for w in words)
    else:
        return 0.0
(Apologies for making superficial comments about the code. The algorithm itself certainly seems interesting.)