Plans regarding (pure) learning algorithms.
Detailed Description
 Todo:
 Connections
 Todo:
 basic_learning

Write a version of basic_learning which uses the most basic conflict analysis:

In some order go through phi and remove assignments which are not needed to produce the conflict.

For that, phi must become a list (with new decision variables appended to the end).

The two most natural orders are the given one and the reverse order.

Since we are always doing a complete restart, it seems best to use the given order, so that we try hardest to remove the first decision variable (in order to get a different new beginning).
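The removal loop described above can be sketched as follows. This is a hypothetical helper (names `minimise_conflict` and `conflicts` are assumptions, not part of the library): phi is a list of literals with decision variables appended at the end, and `conflicts` stands for whatever test (the reduction) reports that the remaining assignment still produces the conflict.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Sketch of the most basic conflict analysis: go through phi in the
// given order and remove assignments which are not needed to produce
// the conflict. "conflicts" is any test reporting whether the reduced
// assignment still yields the conflict (an assumed interface).
std::vector<int> minimise_conflict(
    std::vector<int> phi,
    const std::function<bool(const std::vector<int>&)>& conflicts) {
  for (std::size_t i = 0; i < phi.size();) {
    std::vector<int> reduced(phi);
    reduced.erase(reduced.begin() + i);
    if (conflicts(reduced)) phi = reduced;  // assignment i was not needed
    else ++i;                               // assignment i is essential
  }
  return phi;
}
```

Going through phi in the given order means the first decision variables are the first candidates for removal, matching the restart rationale above.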

We might use a different (stronger) reduction here.

If one were to do "intelligent backtracking", then it seems sensible to backtrack to the first decision variable in the given order which is not used in the conflict clause.

The basic statistic is the number of learned clauses.

A variation of the algorithm returns the set of learned clauses.

We should run experiments on heuristics:

The goal is to minimise the number of learned clauses.

A first guess is satisfiability-orientation.

On the other hand, like for ordinary DLL-algorithms, one can seek to maximise the number of new clauses (for stronger reductions).

This seems sensible, since it (greedily) tries to make the learned clause as short as possible (by having more inferred literals).

In the current framework the heuristics just sees the total clausesets, without knowing which clauses have been learned.

We should first see what can be gained within this restricted framework.

A variation of this algorithm does not stop with the first satisfying assignment found, but continues (and outputs all satisfying (partial) assignments found).

This realises again a CNF -> DNF translation, but this time in general not producing a hitting DNF.

The satisfying assignment is phi together with the enforced assignments.

Outputting only phi yields a somewhat reduced representation, but likely this doesn't matter much, since the dominant factor is the number of phi's returned.

Of course, from the produced satisfying assignment one can remove assignments which are not needed for the satisfaction. Removal of an inferred assignment seems sensible only if the corresponding decision variable has been removed before.
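The all-solutions variation can be sketched as a simple splitting procedure. Hypothetical names and representation (assumptions, not the library's types): clauses as vectors of integer literals, phi as a list of literals; only phi is output, as discussed above.

```cpp
#include <vector>

using Clause = std::vector<int>;      // literals as +/- variable indices
using ClauseSet = std::vector<Clause>;
using Assignment = std::vector<int>;  // a partial assignment phi, as a list

bool has(const Assignment& phi, const int lit) {
  for (const int x : phi) if (x == lit) return true;
  return false;
}

// Sketch: do not stop with the first satisfying assignment found, but
// output all satisfying partial assignments. The collected phi's form
// a DNF representation of F, in general not a hitting DNF.
void all_solutions(const ClauseSet& F, Assignment& phi,
                   std::vector<Assignment>& out) {
  for (const Clause& C : F) {
    bool satisfied = false;
    int free_lit = 0;
    for (const int lit : C) {
      if (has(phi, lit)) { satisfied = true; break; }
      if (!has(phi, -lit)) free_lit = lit;
    }
    if (satisfied) continue;
    if (free_lit == 0) return;        // clause falsified: conflict
    phi.push_back(free_lit);          // branch on a literal of C
    all_solutions(F, phi, out);
    phi.back() = -free_lit;
    all_solutions(F, phi, out);
    phi.pop_back();
    return;
  }
  out.push_back(phi);                 // every clause satisfied: output phi
}
```

Since a branch is closed as soon as every clause is satisfied, the output assignments are partial, and different output clauses may overlap, so in general no hitting DNF results.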

A DNF-clause-set G is equivalent to a CNF-clause-set F iff every clause of G is a transversal of F and union(F, comp_cs(G)) is unsatisfiable.

The first condition is easily checkable, and for DLL-algorithms also the second condition is easily checkable, since F can be replaced by a hitting clause-set F' which is easily checked to be implied, and then union(F', G) is an unsatisfiable hitting clause-set.

Now let G be the DNF-clause-set obtained by basic_learning; is it now checkable in polytime whether G is equivalent to F?

For example, if F0 (as in the algorithm) is the extension of F by the learned clauses, is then perhaps {} in r(union(F0, comp_cs(G)))?

We need to test this.
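Both ingredients of the equivalence test above can be sketched directly. Hypothetical representation (an assumption): clauses as vectors of integer literals; `comp_cs` complements a clause-set elementwise, and `is_transversal` checks the first (easy) condition.

```cpp
#include <cstdlib>
#include <vector>

using Clause = std::vector<int>;      // literals as +/- variable indices
using ClauseSet = std::vector<Clause>;

// comp_cs(G): elementwise complementation of a clause-set.
ClauseSet comp_cs(const ClauseSet& G) {
  ClauseSet result;
  for (const Clause& C : G) {
    Clause D;
    for (const int lit : C) D.push_back(-lit);
    result.push_back(D);
  }
  return result;
}

// First condition: C is a transversal of F, i.e. C shares a literal
// with every clause of F (so C, read as a partial assignment,
// satisfies every clause of F).
bool is_transversal(const Clause& C, const ClauseSet& F) {
  for (const Clause& D : F) {
    bool hit = false;
    for (const int a : C)
      for (const int b : D)
        if (a == b) hit = true;
    if (!hit) return false;
  }
  return true;
}
```

The second condition, unsatisfiability of union(F0, comp_cs(G)) under the reduction r, would then be tested on top of these helpers.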
 Todo:
 Intelligent conflict-analysis

Perhaps we make it the responsibility of an augmented reduction to deliver with each derived assignment the decision variables responsible for it.

This is "eager conflict-analysis".

Then from the falsified clause we directly obtain the conflict-clause containing only the responsible decision variables.
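A minimal sketch of this eager bookkeeping, under an assumed representation (the names and types are hypothetical): the augmented reduction stores, for every assigned variable, the set of decision variables responsible for it (a decision variable being responsible for itself), so the conflict-clause is read off directly.

```cpp
#include <cstdlib>
#include <map>
#include <set>
#include <vector>

using Clause = std::vector<int>;      // literals as +/- variable indices
// For each assigned variable: the decision variables responsible for it.
using Reasons = std::map<int, std::set<int>>;

// Eager conflict-analysis: from the falsified clause the responsible
// decision variables are obtained directly, as the union of the
// recorded reason sets of its literals.
std::set<int> responsible_decisions(const Clause& falsified,
                                    const Reasons& reasons) {
  std::set<int> result;
  for (const int lit : falsified) {
    const auto it = reasons.find(std::abs(lit));
    if (it != reasons.end())
      result.insert(it->second.begin(), it->second.end());
  }
  return result;
}
```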

A "cheaper variant" delivers for the set of derived assignments (by the reduction) only an implication dag, and then, when the conflict occurs, we need to analyse it further.

This is "lazy conflict-analysis", as used currently in practice.
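The lazy variant can be sketched as a backwards traversal of the implication dag, performed only once the conflict occurs. Hypothetical representation (an assumption): each derived literal points to its antecedent literals, while decision literals have no antecedents.

```cpp
#include <map>
#include <set>
#include <vector>

using Literal = int;
// Implication dag: derived literal -> its antecedent literals;
// decision literals have no entry (no antecedents).
using ImplicationDag = std::map<Literal, std::vector<Literal>>;

// Lazy conflict-analysis: walk backwards from the assignments that
// falsified the clause until only decision literals remain.
std::set<Literal> analyse_conflict(const std::vector<Literal>& falsified,
                                   const ImplicationDag& dag) {
  std::set<Literal> decisions, seen;
  std::vector<Literal> stack;
  for (const Literal lit : falsified)
    stack.push_back(-lit);              // the assignments falsifying the clause
  while (!stack.empty()) {
    const Literal lit = stack.back();
    stack.pop_back();
    if (!seen.insert(lit).second) continue;
    const auto it = dag.find(lit);
    if (it == dag.end()) decisions.insert(lit);  // a decision literal
    else for (const Literal a : it->second) stack.push_back(a);
  }
  return decisions;
}
```

Compared with the eager variant, the reason sets are never materialised per literal; the work is deferred to this traversal at conflict time.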
Definition in file Learning.hpp.