python - Running parallel iterations -


I am trying to run a set of simulations where there are fixed parameter ranges I need to iterate over, to find out which combination has the least cost. I am using Python multiprocessing for this purpose, but the time consumed is too high. Is there something wrong with my implementation? Or is there a better solution? Thanks in advance.

    import multiprocessing

    class iters(object):
        # parameter ranges to iterate over
        iters = {
            'cwm': {'min': 100, 'max': 130, 'step': 5},
            'fx':  {'min': 1.45, 'max': 1.45, 'step': 0.01},
            'lvt': {'min': 106, 'max': 110, 'step': 1},
            'lvw': {'min': 9.2, 'max': 10, 'step': 0.1},
            'lvk': {'min': 3.3, 'max': 4.3, 'step': 0.1},
            'hvw': {'min': 1, 'max': 2, 'step': 0.1},
            'lvh': {'min': 6, 'max': 7, 'step': 1},
        }

        def run_mp(self):
            mps = []
            m = multiprocessing.Manager()   # Manager, not manager
            q = m.list()                    # shared list collecting results
            cmain = self.iters['cwm']['min']
            while cmain <= self.iters['cwm']['max']:
                # one process per value of 'cwm'
                t2 = multiprocessing.Process(target=mp_main, args=(cmain, self.iters, q))
                mps.append(t2)
                t2.start()
                cmain = cmain + self.iters['cwm']['step']
            for mp in mps:                  # the 'for' keyword was missing
                mp.join()
            r1 = sorted(q, key=lambda x: x['costing'])
            self.counter = len(q)
            return r1[:20]                  # the 20 cheapest combinations

    def mp_main(cmain, iters, q):
        fmain = iters['fx']['min']
        while fmain <= iters['fx']['max']:
            lvtmain = iters['lvt']['min']
            while lvtmain <= iters['lvt']['max']:
                lvwmain = iters['lvw']['min']
                while lvwmain <= iters['lvw']['max']:
                    lvkmain = iters['lvk']['min']
                    while lvkmain <= iters['lvk']['max']:
                        hvwmain = iters['hvw']['min']
                        while hvwmain <= iters['hvw']['max']:
                            lvhmain = iters['lvh']['min']
                            while lvhmain <= iters['lvh']['max']:
                                test = {'cmain': cmain, 'fmain': fmain,
                                        'lvtmain': lvtmain, 'lvwmain': lvwmain,
                                        'lvkmain': lvkmain, 'hvwmain': hvwmain,
                                        'lvhmain': lvhmain}
                                calculations(test, q)
                                lvhmain = lvhmain + iters['lvh']['step']
                            hvwmain = hvwmain + iters['hvw']['step']
                        lvkmain = lvkmain + iters['lvk']['step']
                    lvwmain = lvwmain + iters['lvw']['step']
                lvtmain = lvtmain + iters['lvt']['step']
            fmain = fmain + iters['fx']['step']

    def calculations(test, que):
        # perform huge number of calculations here
        output = {}
        output['data'] = test
        output['costing'] = 'foo'
        que.append(output)

    x = iters()
    x.run_mp()   # was x.run_thread(), which does not exist

From a theoretical standpoint:

You're iterating over every possible combination of six different variables. Unless your search space is very small, or you only want a rough solution, there's no way you'll get meaningful results within a reasonable time.

"I need to iterate over them and find out which combination has the least cost"

This sounds like an optimization problem.

There are many efficient ways of dealing with these problems, depending on the properties of the function you're trying to optimize. If it has a straightforward "shape" (e.g. it is unimodal), you can use a greedy algorithm such as hill climbing, or gradient descent. If it's more complex, you can try shotgun (random-restart) hill climbing.
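To make the idea concrete, here is a minimal sketch of hill climbing over a discrete parameter grid. The cost function and the grid are hypothetical stand-ins (a toy quadratic instead of your real calculations), just to show the mechanics:

```python
import random

# Hypothetical cost function: replace with your own calculations().
def cost(p):
    return (p['x'] - 3) ** 2 + (p['y'] + 1) ** 2

# Parameter grid: each name maps to a list of allowed values.
grid = {
    'x': list(range(-10, 11)),
    'y': list(range(-10, 11)),
}

def hill_climb(grid, cost, start=None):
    # start at a random grid point if no starting point is given
    current = start or {k: random.choice(v) for k, v in grid.items()}
    while True:
        # neighbours differ from current in exactly one parameter, by one grid step
        neighbours = []
        for k, values in grid.items():
            i = values.index(current[k])
            for j in (i - 1, i + 1):
                if 0 <= j < len(values):
                    n = dict(current)
                    n[k] = values[j]
                    neighbours.append(n)
        best = min(neighbours, key=cost)
        if cost(best) >= cost(current):
            return current          # no neighbour improves: local minimum
        current = best

best = hill_climb(grid, cost, start={'x': -10, 'y': 10})
print(best, cost(best))   # prints {'x': 3, 'y': -1} 0
```

Instead of evaluating every point of the grid, this only evaluates points along a downhill path, which is why it scales so much better than exhaustive nested loops when the cost surface is well-behaved.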

There are lots of more complex algorithms, but these are the basic ones, and they will take you a long way in your situation.


From a more practical, programming standpoint:

You are using very large steps - so large, in fact, that you'll only probe the function about 19,200 times. If that is what you want, it seems feasible. In fact, if I comment out y=calculations(test,q), it returns instantly on my computer.

As you indicate, there's a "huge number of calculations" happening there - so maybe that is the real problem, and not the code you're asking about.

As for multiprocessing, my honest advice is not to use it until you already have your code executing reasonably fast. Unless you're running a supercomputing cluster (and you're not programming a supercomputing cluster in Python, are you??), parallel processing will give you speedups of about 2-4x. That's negligible compared to the gains from the kind of algorithmic changes mentioned above.

As an aside, I don't think I've ever seen that many nested loops in my life (excluding code jokes). If you don't want to switch to another algorithm, you might want to consider using itertools.product together with numpy.arange.
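For instance, the seven nested while loops collapse into a single itertools.product call. This sketch uses a small stdlib-only helper in place of numpy.arange (building values by index rather than repeated addition, which also avoids the float-drift your `while x <= max` loops are exposed to):

```python
import itertools

def frange(lo, hi, step):
    """Inclusive float range, built by index to avoid drift from repeated addition."""
    n = int(round((hi - lo) / step)) + 1
    return [round(lo + i * step, 10) for i in range(n)]

# the same seven parameter ranges as in the question
ranges = {
    'cwm': frange(100, 130, 5),
    'fx':  frange(1.45, 1.45, 0.01),
    'lvt': frange(106, 110, 1),
    'lvw': frange(9.2, 10, 0.1),
    'lvk': frange(3.3, 4.3, 0.1),
    'hvw': frange(1, 2, 0.1),
    'lvh': frange(6, 7, 1),
}

names = list(ranges)
# one flat generator over every combination, replacing all the nested loops
combos = (dict(zip(names, values))
          for values in itertools.product(*ranges.values()))

count = sum(1 for _ in combos)
print(count)   # total number of parameter combinations
```

Each item yielded by `combos` is a dict like the `test` dict in your code, ready to hand to `calculations()` (or to `pool.map`).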

