Handling multithreading (beta)

This recipe explains how learning algorithms in yaplf can be run concurrently by exploiting Python threads. A good knowledge of the Python programming language is required, as well as an understanding of the basic concepts behind yaplf learning algorithms.

A light gray cell denotes one or more Python statements, while a subsequent dark gray cell contains the expected output of those statements. Statements can be executed either in a Python or in a Sage shell. For the sake of visualization, this document assumes that statements are executed in a Sage notebook, so that graphics are shown right after the cell generating them. Execution in a pure Python environment works in the same way, the only difference being that graphic functions return a matplotlib object that can be dealt with as usual.

Running learning algorithms concurrently

yaplf moderately supports multithreaded execution of learning algorithms. Indeed, their base class LearningAlgorithm subclasses Thread, so that each learning algorithm can be executed either sequentially or concurrently, by invoking run or start, respectively.
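The run/start distinction is the standard one inherited from Thread, and can be sketched with a toy stand-in class (ToyLearningAlgorithm below is purely illustrative and is not part of yaplf):

```python
import threading

class ToyLearningAlgorithm(threading.Thread):
    """Minimal stand-in mimicking how a learning algorithm extends Thread."""
    def __init__(self, sample):
        threading.Thread.__init__(self)
        self.sample = sample
        self.model = None

    def run(self):
        # A trivial "learning" step standing in for real training:
        # the inferred model is just the sample mean.
        self.model = sum(self.sample) / float(len(self.sample))

# Sequential execution: calling run directly blocks until it returns.
seq = ToyLearningAlgorithm([1, 2, 3])
seq.run()

# Concurrent execution: start spawns a thread which invokes run.
conc = ToyLearningAlgorithm([4, 5, 6])
conc.start()
conc.join()  # wait for the thread to finish before reading its model

print(seq.model, conc.model)
```

Either way the trained model ends up in the same attribute; the only difference is whether the caller blocks during training.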

For instance, the following code concurrently executes four SV classification learning algorithms, each equipped with a polynomial kernel of a different degree, on the same data set, and subsequently waits until all of them have terminated:

from yaplf.data import LabeledExample
from yaplf.algorithms.svm.classification import SVMClassificationAlgorithm
from yaplf.models.kernel import PolynomialKernel
xor_sample = [LabeledExample((0.1, 0.1), -1), LabeledExample((0.1, 0.9), 1),
  LabeledExample((0.9, 0.1), 1), LabeledExample((0.9, 0.9), -1)]
algs = [SVMClassificationAlgorithm(xor_sample, kernel=PolynomialKernel(deg))
  for deg in (2, 5, 10, 15)]
for alg in algs:
  alg.start()

for alg in algs:
  alg.join()

The output of the previous cell is omitted, as it contains the verbose output of the default SV classification solver. Once execution has ended, all learning processes have converged and it is possible, for instance, to plot the decision functions of the inferred SV classifiers:

for alg in algs:
  alg.model.plot((0,1), (0,1), shading=True)
[Four shaded decision function plots, one per polynomial kernel degree]

As pointed out in the title of this recipe, this feature should still be considered in beta version and its capabilities are limited. For instance, it cannot be applied to algorithms such as BackpropagationAlgorithm, whose run function requires the specification of named arguments in order to actually work.
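A possible workaround in such cases is to bypass start altogether and hand the run method to a plain Thread, passing the named arguments through kwargs. The sketch below uses a dummy class and invented argument names (threshold, max_iterations) rather than yaplf's actual signatures:

```python
import threading

def run_in_thread(alg, **kwargs):
    """Run an algorithm's run method, with named arguments, in its own thread.

    `alg` only needs a `run` method accepting keyword arguments; the thread
    is started immediately and returned so the caller can join on it.
    """
    thread = threading.Thread(target=alg.run, kwargs=kwargs)
    thread.start()
    return thread

# Hypothetical stand-in for an algorithm whose run requires named arguments.
class DummyAlgorithm(object):
    def run(self, threshold=None, max_iterations=None):
        # A real algorithm would train here; we just record the arguments.
        self.result = (threshold, max_iterations)

alg = DummyAlgorithm()
t = run_in_thread(alg, threshold=0.1, max_iterations=100)
t.join()
print(alg.result)
```

Since the wrapper returns the Thread object, several algorithms launched this way can still be awaited with the usual join calls.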