I was working on a Project Euler problem and ran into a question that sparked my curiosity. I have two snippets of code: one uses a list, the other uses a dictionary.
using lists:
```python
n = 100000
num = []
suma = 0
for i in range(n, 1, -1):
    tmp = tuple(set([n for n in factors(i)]))
    if len(tmp) != 2:
        continue
    if tmp not in num:
        num.append(tmp)
        suma += i
```
using dictionaries:
```python
n = 100000
num = {}
suma = 0
for i in range(n, 1, -1):
    tmp = tuple(set([n for n in factors(i)]))
    if len(tmp) != 2:
        continue
    if tmp not in num:
        num[tmp] = i
        suma += i
```
I am only concerned about performance here. Why does the second example, which uses a dictionary, run so much faster than the first one with a list? The dictionary version is roughly three-hundred-fold faster!
I tested both snippets with n = 1000000: the first ran in 1032 seconds, while the second finished in just 3.3 seconds. Amazing!
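For context, here is a minimal timing sketch (standard library only, with made-up data rather than the `factors`-based tuples above) isolating the operation that differs between the two snippets: the membership test `tmp not in num`. On a list this scans elements one by one (O(n)); on a dict it is a hash-table lookup (O(1) on average).

```python
import timeit

# Build a list and a dict holding the same 100,000 keys.
items = list(range(100000))
as_list = items
as_dict = dict.fromkeys(items)

# Searching for a value stored near the end forces the list to scan
# almost every element; the dict finds it with a single hash lookup.
target = 99999

list_time = timeit.timeit(lambda: target in as_list, number=100)
dict_time = timeit.timeit(lambda: target in as_dict, number=100)

print(f"list membership: {list_time:.4f}s  dict membership: {dict_time:.6f}s")
```

Because the main loop performs this membership test on every iteration, the list version's total cost grows quadratically with the number of stored tuples, which matches the dramatic slowdown observed.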