# Earley’s algorithm in C++

## Introduction

Earley’s algorithm is a dynamic programming algorithm for generalised parsing, i.e., parsing with grammars that might be ambiguous. For example, the Earley algorithm could parse sentences generated by the following toy English grammar, which I am going to use in the explanation and code:

S -> NP VP
NP -> NP PP
NP -> Noun
VP -> Verb NP
VP -> VP PP
PP -> Prep NP
Noun -> John
Noun -> Mary
Noun -> Denver
Verb -> called
Prep -> from


As an example of the ambiguity of this grammar, consider the sentence “John called Mary from Denver”. Was John calling from Denver, or was Mary from Denver?
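The two readings correspond to two different parse trees; the prepositional phrase “from Denver” can attach to the verb phrase (via “VP -> VP PP”) or to “Mary” (via “NP -> NP PP”):

```text
(S (NP (Noun John))
   (VP (VP (Verb called) (NP (Noun Mary)))
       (PP (Prep from) (NP (Noun Denver)))))        -- the calling was done from Denver

(S (NP (Noun John))
   (VP (Verb called)
       (NP (NP (Noun Mary))
           (PP (Prep from) (NP (Noun Denver))))))   -- Mary is from Denver
```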

## Data structures

The Earley algorithm uses three data structures: states, statelists, and the chart.

### States

A state represents partial progress in applying a grammar rule. In particular, it records how far along the right-hand side of the rule the parse has progressed. This is usually represented as a dot between symbols of the right-hand side showing the position, hence a state is sometimes known as a “dotted rule”.

For example, the state below is for the grammar rule “PP -> Prep NP” from the grammar above:

(PP -> Prep @ NP, [3 , 4])


The position of the dot (represented by a “@”) indicates that the “Prep” has been processed and the “NP” should be next. The “3” and “4” are the positions within the input that the processed part of this state refers to (i.e., the span of the “Prep”).

### Statelists

A statelist is an ordered list of states. The states in a statelist are unique, so it is sometimes called a state set.
Once a state has been added to a statelist, it is never removed or modified – instead new states are created in this or the next statelist based on the state by the procedures described below.

### The chart

The chart is a set of statelists. There is one statelist for every point in the input, from before the first word to after the last. The statelists are generated one at a time as the parse progresses. There are only two statelists in action at any time, the current one being processed and the next one being generated.

## The algorithm

The algorithm processes the chart from the first statelist to the last, constructing the statelists ahead of itself as it goes.
Rather than looping over the input, it loops over the statelists, the processing of which adds new statelists ahead of the iteration. The processing of states within the statelists consumes the input.

There are three procedures, the Predictor, the Scanner, and the Completer.

### The Predictor

The Predictor processes states in which the dot is before a non-terminal. For example, in the state below,

(S -> @ NP VP, [0 , 0])


the dot is before the non-terminal “NP”, so the Predictor would process this state.
The Predictor inserts into the current statelist a state for each rule that has the symbol after the dot (“NP”) on its left-hand side (“NP -> NP PP” and “NP -> Noun”), placing the dot before the first symbol of each new rule’s right-hand side. So the state above gives rise to the following two states:

(NP -> @ NP PP, [0 , 0])
(NP -> @ Noun, [0 , 0])


### The Scanner

The Scanner identifies states in the statelist where the dot is before a terminal (in other words, those states where the dot is to the left of a symbol not processed by the Predictor). It reads the next word of input, and inserts a state into the next statelist containing the same rule with the dot moved past the terminal.
For example, consider this state in which the Predictor has predicted a “Noun”, whose right hand side is a terminal (Noun -> John):

(Noun -> @ John, [0 , 0])


This causes the Scanner to read the next word of input, which, if it is “John”, results in the following state being added to the next statelist:

(Noun -> John @, [0 , 1])


### The Completer

The Completer processes those states not handled by the Predictor or Scanner – in other words, those in which the dot is at the right-hand end, after every symbol in the rule. These states are now complete. For each completed state, the Completer finds states that have the completed state’s left-hand symbol immediately after their dot, and inserts copies of them with the dot moved past that symbol, because the completion of the original state represents the completion of that symbol. The Completer’s job, then, is to drive the dots forward in rules that are being processed.
Continuing the example above, as soon as the Scanner has added the state for “Noun -> John”, that state has the dot on the extreme right, so the Completer adds a state marking the completion of the “Noun” part of the rule “NP -> Noun”:

(NP -> Noun @, [0 , 1])


## Summary

The Predictor and Completer insert states into the current statelist. The Scanner inserts states into the next statelist (because it advances the reading position). The Predictor and Completer are duals of each other and book-end the work of the Scanner. The Predictor moves down the grammar hierarchy to the bottom in order to introduce states with lower level rules. It feeds the Scanner by working its way down to the terminals and inserting states for their rules. The Completer moves up the grammar hierarchy to the top to advance the dots. It is propagating the work of the Scanner upwards, all the way to the start symbol. The parse is successful if the start state is in the last statelist, with the dot on the right of its right hand side.

## The code

Below is the code in C++. The class earley_parser is instantiated with a grammar (see Context-free grammar in C++), and its parse() method attempts to recognise a sentence, and prints the statelists if it is passed a std::ostream. The Predictor, Scanner, and Completer are implemented as methods with those names.

#ifndef EARLEY_HPP
#define EARLEY_HPP

#include <string>
#include <vector>
#include <list>
#include <iostream>
#include <iterator>
#include <algorithm>

#include <grammar.hpp>

namespace MB
{

namespace detail
{

// An element of a statelist, also known as a "dotted rule" or "Earley item"
struct state
{
MB::rule::const_ptr rule_; // The grammar rule
const unsigned int right_; // Index of right hand side alternative
unsigned int dot_; // Position of dot within symbols on right hand side
unsigned int i_, j_; // Positions within the input
char added_by_; // Which function added this state
state(MB::rule::const_ptr rule, unsigned int right, unsigned int i, unsigned int j)
: rule_(rule), right_(right), dot_(0), i_(i), j_(j), added_by_(0)
{
}

// Is dot all the way to the right?
bool completed() const
{
return dot_ == rule_->right()[right_].size();
}

// Get symbol to the right of dot
std::string next_symbol() const
{
return rule_->right()[right_][dot_];
}
};

// Pretty-print state
std::ostream& operator <<(std::ostream& os, const state& st)
{
    const std::vector<std::string>& right = st.rule_->right()[st.right_];
    size_t rlen = right.size();
    os << '(';
    os << st.rule_->left() << " -> ";
    unsigned int s;
    for (s = 0; s < rlen; ++s) {
        if (s == st.dot_) {
            os << "@ ";
        }
        os << right[s];
        if (s < rlen - 1) {
            os << ' ';
        }
    }
    if (s == st.dot_) {
        os << " @";
    }
    os << ", [" << st.i_ << " , " << st.j_ << "]) ";
    switch (st.added_by_) {
    case 'P':
        os << "predictor";
        break;
    case 'S':
        os << "scanner";
        break;
    case 'C':
        os << "completer";
        break;
    default:
        os << "start state";
    }
    return os;
}

// Needed to check for duplicate states
bool operator ==(const state& state1, const state& state2)
{
return state1.rule_->left() == state2.rule_->left()
&& state1.rule_->right() == state2.rule_->right()
&& state1.right_ == state2.right_
&& state1.dot_ == state2.dot_
&& state1.i_ == state2.i_
&& state1.j_ == state2.j_;
}

// A statelist is a list of states
typedef std::list<state> statelist;

// A chart is a vector of statelists
struct chart
{
const MB::grammar& grammar_;
std::vector<statelist> chart_;

chart(const MB::grammar& grammar)
: grammar_(grammar),
// Chart begins with 1 statelist containing 1 dummy state used to track
// successful parse
chart_(1, statelist(1, state(MB::rule::create("$", grammar.get_start_left()), 0, 0, 0)))
{
}

// Add state st to statelist s
void add_state(state& st, unsigned int s)
{
    if (s < chart_.size()) {
        // Adding to the last statelist
        statelist::iterator it = std::find(chart_[s].begin(), chart_[s].end(), st);
        if (it == chart_[s].end()) {
            chart_[s].push_back(st);
        }
    }
    else {
        // Adding to a new statelist
        chart_.push_back(statelist(1, st));
    }
}

// Add predictions for the next symbol in this state
void predictor(state& st)
{
    std::vector<MB::rule::const_ptr> rules;
    grammar_.get_rules_for_left(st.next_symbol(), std::back_inserter(rules));
    for (MB::rule::const_ptr r : rules) {
        for (unsigned int a = 0; a < r->right().size(); ++a) {
            state prediction = state(r, a, st.j_, st.j_);
            prediction.added_by_ = 'P';
            add_state(prediction, st.j_);
        }
    }
}

// Scan input for next symbol
void scanner(const state& st, const std::vector<std::string>& input)
{
    const std::string& word = input[st.j_];
    if (word == st.rule_->right()[st.right_][st.dot_]) {
        state scanned = state(st.rule_, st.right_, st.i_, st.j_ + 1);
        scanned.dot_ = st.dot_ + 1;
        scanned.added_by_ = 'S';
        add_state(scanned, st.j_ + 1);
    }
}

// Complete states
void completer(const state& st)
{
    const statelist& list = chart_[st.i_];
    const unsigned int i = st.i_;
    const unsigned int j = st.j_;
    for (const state& candidate : list) {
        if (candidate.j_ == i
                && !candidate.completed()
                && candidate.next_symbol() == st.rule_->left()) {
            state completed = state(candidate.rule_, candidate.right_, candidate.i_, j);
            completed.dot_ = candidate.dot_ + 1;
            completed.added_by_ = 'C';
            add_state(completed, j);
        }
    }
}

// We have succeeded if the completed dummy state is in the final statelist
bool succeeded() const
{
    const statelist& list = chart_[chart_.size() - 1];
    return std::find_if(list.begin(), list.end(),
        [](const state& st) -> bool {
            return st.rule_->left() == "$" && st.completed();
        }) != list.end();
}

// The main algorithm
bool parse(const std::vector<std::string>& input, std::ostream *os)
{
    for (unsigned int i = 0; i <= input.size(); ++i) {
        if (chart_.size() > i) { // Check for running out of statelists when parse fails
            for (statelist::iterator it = chart_[i].begin(); it != chart_[i].end(); ++it) {
                state& st = *it;
                if (!st.completed()
                        && !grammar_.symbol_is_terminal(st.next_symbol())) {
                    predictor(st);
                }
                else if (!st.completed()) {
                    if (i < input.size()) {
                        scanner(st, input);
                    }
                }
                else {
                    completer(st);
                }
            }
            if (os) {
                *os << *this;
                *os << '\n';
            }
        }
    }
    return succeeded();
}

// Pretty-print chart
friend std::ostream& operator <<(std::ostream& os, const chart &ch)
{
for (unsigned int i = 0; i < ch.chart_.size(); ++i) {
os << "S" << i << ": ";
os << '[';
unsigned int j = 0;
for (const state& st : ch.chart_[i]) {
os << st;
if (j < ch.chart_[i].size() - 1) {
os << ",\n";
}
++j;
}
os << ']';
os << "\n\n";
}
return os;
}
};

} // namespace detail

class earley_parser
{
public:
earley_parser(const MB::grammar& grammar)
: grammar_(grammar)
{
}
template <class InputIt>
bool parse(InputIt begin, InputIt end) const
{
std::vector<std::string> input;
std::copy(begin, end, std::back_inserter(input));
return detail::chart(grammar_).parse(input, nullptr);
}
template <class InputIt>
bool parse(InputIt begin, InputIt end, std::ostream& os) const
{
std::vector<std::string> input;
std::copy(begin, end, std::back_inserter(input));
return detail::chart(grammar_).parse(input, &os);
}
private:
const MB::grammar& grammar_;
};

} // namespace MB

#endif // EARLEY_HPP


Here is an example program that attempts to recognise the ambiguous sentence “John called Mary from Denver” mentioned in the introduction:

#include <fstream>
#include <iostream>
#include <string>

#include <grammar.hpp>
#include <earley.hpp>

int main()
{
std::string filename = "earley.dat";
std::ifstream ifs(filename);
if (ifs) {
MB::grammar grammar(ifs);
std::string sentence[] = {"John", "called", "Mary", "from", "Denver"};
const size_t len = sizeof(sentence) / sizeof(sentence[0]);
bool success = MB::earley_parser(grammar).parse(sentence, sentence + len, std::cout);
std::cout << "Success: " << std::boolalpha << success << '\n';
}
else {
std::cerr << "Couldn't open " << filename << " for reading\n";
}
}


Here is the output. The entire chart is printed for each iteration. Notice how each iteration adds another statelist. The parsing process is a process of driving the dots forward. At the end of the parse, the presence of the completed start rule in the last statelist indicates success.

S0: [($ -> @ S, [0 , 0]) start state,
(S -> @ NP VP, [0 , 0]) predictor,
(NP -> @ NP PP, [0 , 0]) predictor,
(NP -> @ Noun, [0 , 0]) predictor,
(Noun -> @ John, [0 , 0]) predictor,
(Noun -> @ Mary, [0 , 0]) predictor,
(Noun -> @ Denver, [0 , 0]) predictor]

S1: [(Noun -> John @, [0 , 1]) scanner]

S0: [($ -> @ S, [0 , 0]) start state,
(S -> @ NP VP, [0 , 0]) predictor,
(NP -> @ NP PP, [0 , 0]) predictor,
(NP -> @ Noun, [0 , 0]) predictor,
(Noun -> @ John, [0 , 0]) predictor,
(Noun -> @ Mary, [0 , 0]) predictor,
(Noun -> @ Denver, [0 , 0]) predictor]

S1: [(Noun -> John @, [0 , 1]) scanner,
(NP -> Noun @, [0 , 1]) completer,
(S -> NP @ VP, [0 , 1]) completer,
(NP -> NP @ PP, [0 , 1]) completer,
(VP -> @ Verb NP, [1 , 1]) predictor,
(VP -> @ VP PP, [1 , 1]) predictor,
(PP -> @ Prep NP, [1 , 1]) predictor,
(Verb -> @ called, [1 , 1]) predictor,
(Prep -> @ from, [1 , 1]) predictor]

S2: [(Verb -> called @, [1 , 2]) scanner]

S0: [($ -> @ S, [0 , 0]) start state,
(S -> @ NP VP, [0 , 0]) predictor,
(NP -> @ NP PP, [0 , 0]) predictor,
(NP -> @ Noun, [0 , 0]) predictor,
(Noun -> @ John, [0 , 0]) predictor,
(Noun -> @ Mary, [0 , 0]) predictor,
(Noun -> @ Denver, [0 , 0]) predictor]

S1: [(Noun -> John @, [0 , 1]) scanner,
(NP -> Noun @, [0 , 1]) completer,
(S -> NP @ VP, [0 , 1]) completer,
(NP -> NP @ PP, [0 , 1]) completer,
(VP -> @ Verb NP, [1 , 1]) predictor,
(VP -> @ VP PP, [1 , 1]) predictor,
(PP -> @ Prep NP, [1 , 1]) predictor,
(Verb -> @ called, [1 , 1]) predictor,
(Prep -> @ from, [1 , 1]) predictor]

S2: [(Verb -> called @, [1 , 2]) scanner,
(VP -> Verb @ NP, [1 , 2]) completer,
(NP -> @ NP PP, [2 , 2]) predictor,
(NP -> @ Noun, [2 , 2]) predictor,
(Noun -> @ John, [2 , 2]) predictor,
(Noun -> @ Mary, [2 , 2]) predictor,
(Noun -> @ Denver, [2 , 2]) predictor]

S3: [(Noun -> Mary @, [2 , 3]) scanner]

S0: [($ -> @ S, [0 , 0]) start state,
(S -> @ NP VP, [0 , 0]) predictor,
(NP -> @ NP PP, [0 , 0]) predictor,
(NP -> @ Noun, [0 , 0]) predictor,
(Noun -> @ John, [0 , 0]) predictor,
(Noun -> @ Mary, [0 , 0]) predictor,
(Noun -> @ Denver, [0 , 0]) predictor]

S1: [(Noun -> John @, [0 , 1]) scanner,
(NP -> Noun @, [0 , 1]) completer,
(S -> NP @ VP, [0 , 1]) completer,
(NP -> NP @ PP, [0 , 1]) completer,
(VP -> @ Verb NP, [1 , 1]) predictor,
(VP -> @ VP PP, [1 , 1]) predictor,
(PP -> @ Prep NP, [1 , 1]) predictor,
(Verb -> @ called, [1 , 1]) predictor,
(Prep -> @ from, [1 , 1]) predictor]

S2: [(Verb -> called @, [1 , 2]) scanner,
(VP -> Verb @ NP, [1 , 2]) completer,
(NP -> @ NP PP, [2 , 2]) predictor,
(NP -> @ Noun, [2 , 2]) predictor,
(Noun -> @ John, [2 , 2]) predictor,
(Noun -> @ Mary, [2 , 2]) predictor,
(Noun -> @ Denver, [2 , 2]) predictor]

S3: [(Noun -> Mary @, [2 , 3]) scanner,
(NP -> Noun @, [2 , 3]) completer,
(VP -> Verb NP @, [1 , 3]) completer,
(NP -> NP @ PP, [2 , 3]) completer,
(S -> NP VP @, [0 , 3]) completer,
(VP -> VP @ PP, [1 , 3]) completer,
(PP -> @ Prep NP, [3 , 3]) predictor,
($ -> S @, [0 , 3]) completer,
(Prep -> @ from, [3 , 3]) predictor]

S4: [(Prep -> from @, [3 , 4]) scanner]

S0: [($ -> @ S, [0 , 0]) start state,
(S -> @ NP VP, [0 , 0]) predictor,
(NP -> @ NP PP, [0 , 0]) predictor,
(NP -> @ Noun, [0 , 0]) predictor,
(Noun -> @ John, [0 , 0]) predictor,
(Noun -> @ Mary, [0 , 0]) predictor,
(Noun -> @ Denver, [0 , 0]) predictor]

S1: [(Noun -> John @, [0 , 1]) scanner,
(NP -> Noun @, [0 , 1]) completer,
(S -> NP @ VP, [0 , 1]) completer,
(NP -> NP @ PP, [0 , 1]) completer,
(VP -> @ Verb NP, [1 , 1]) predictor,
(VP -> @ VP PP, [1 , 1]) predictor,
(PP -> @ Prep NP, [1 , 1]) predictor,
(Verb -> @ called, [1 , 1]) predictor,
(Prep -> @ from, [1 , 1]) predictor]

S2: [(Verb -> called @, [1 , 2]) scanner,
(VP -> Verb @ NP, [1 , 2]) completer,
(NP -> @ NP PP, [2 , 2]) predictor,
(NP -> @ Noun, [2 , 2]) predictor,
(Noun -> @ John, [2 , 2]) predictor,
(Noun -> @ Mary, [2 , 2]) predictor,
(Noun -> @ Denver, [2 , 2]) predictor]

S3: [(Noun -> Mary @, [2 , 3]) scanner,
(NP -> Noun @, [2 , 3]) completer,
(VP -> Verb NP @, [1 , 3]) completer,
(NP -> NP @ PP, [2 , 3]) completer,
(S -> NP VP @, [0 , 3]) completer,
(VP -> VP @ PP, [1 , 3]) completer,
(PP -> @ Prep NP, [3 , 3]) predictor,
($ -> S @, [0 , 3]) completer,
(Prep -> @ from, [3 , 3]) predictor]

S4: [(Prep -> from @, [3 , 4]) scanner,
(PP -> Prep @ NP, [3 , 4]) completer,
(NP -> @ NP PP, [4 , 4]) predictor,
(NP -> @ Noun, [4 , 4]) predictor,
(Noun -> @ John, [4 , 4]) predictor,
(Noun -> @ Mary, [4 , 4]) predictor,
(Noun -> @ Denver, [4 , 4]) predictor]

S5: [(Noun -> Denver @, [4 , 5]) scanner]

S0: [($ -> @ S, [0 , 0]) start state,
(S -> @ NP VP, [0 , 0]) predictor,
(NP -> @ NP PP, [0 , 0]) predictor,
(NP -> @ Noun, [0 , 0]) predictor,
(Noun -> @ John, [0 , 0]) predictor,
(Noun -> @ Mary, [0 , 0]) predictor,
(Noun -> @ Denver, [0 , 0]) predictor]

S1: [(Noun -> John @, [0 , 1]) scanner,
(NP -> Noun @, [0 , 1]) completer,
(S -> NP @ VP, [0 , 1]) completer,
(NP -> NP @ PP, [0 , 1]) completer,
(VP -> @ Verb NP, [1 , 1]) predictor,
(VP -> @ VP PP, [1 , 1]) predictor,
(PP -> @ Prep NP, [1 , 1]) predictor,
(Verb -> @ called, [1 , 1]) predictor,
(Prep -> @ from, [1 , 1]) predictor]

S2: [(Verb -> called @, [1 , 2]) scanner,
(VP -> Verb @ NP, [1 , 2]) completer,
(NP -> @ NP PP, [2 , 2]) predictor,
(NP -> @ Noun, [2 , 2]) predictor,
(Noun -> @ John, [2 , 2]) predictor,
(Noun -> @ Mary, [2 , 2]) predictor,
(Noun -> @ Denver, [2 , 2]) predictor]

S3: [(Noun -> Mary @, [2 , 3]) scanner,
(NP -> Noun @, [2 , 3]) completer,
(VP -> Verb NP @, [1 , 3]) completer,
(NP -> NP @ PP, [2 , 3]) completer,
(S -> NP VP @, [0 , 3]) completer,
(VP -> VP @ PP, [1 , 3]) completer,
(PP -> @ Prep NP, [3 , 3]) predictor,
($ -> S @, [0 , 3]) completer,
(Prep -> @ from, [3 , 3]) predictor]

S4: [(Prep -> from @, [3 , 4]) scanner,
(PP -> Prep @ NP, [3 , 4]) completer,
(NP -> @ NP PP, [4 , 4]) predictor,
(NP -> @ Noun, [4 , 4]) predictor,
(Noun -> @ John, [4 , 4]) predictor,
(Noun -> @ Mary, [4 , 4]) predictor,
(Noun -> @ Denver, [4 , 4]) predictor]

S5: [(Noun -> Denver @, [4 , 5]) scanner,
(NP -> Noun @, [4 , 5]) completer,
(PP -> Prep NP @, [3 , 5]) completer,
(NP -> NP @ PP, [4 , 5]) completer,
(NP -> NP PP @, [2 , 5]) completer,
(VP -> VP PP @, [1 , 5]) completer,
(PP -> @ Prep NP, [5 , 5]) predictor,
(VP -> Verb NP @, [1 , 5]) completer,
(NP -> NP @ PP, [2 , 5]) completer,
(S -> NP VP @, [0 , 5]) completer,
(VP -> VP @ PP, [1 , 5]) completer,
(Prep -> @ from, [5 , 5]) predictor,
($ -> S @, [0 , 5]) completer]

Success: true


# Levenshtein distance in C

The Levenshtein distance is a measure of the similarity of two strings. It is defined as the minimum number of insertions, deletions, and substitutions necessary to transform the first string into the second.

For example, the Levenshtein distance between “CHALK” and “CHEESE” is 4, as follows:

1. Substitute E for A
2. Substitute E for L
3. Substitute S for K
4. Insert E

The Levenshtein distance can be calculated using a dynamic programming algorithm due to Wagner and Fischer. The algorithm fills a table with the distance between all of the prefixes of the two strings, the final entry being the distance between the two entire strings.
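Before the full implementation, the core recurrence can be sketched on its own. The following distance-only function is an illustrative sketch, separate from the code below: it keeps only two rolling rows of the Wagner–Fischer matrix, which is enough when the edit script does not need to be traced back.

```c
#include <stdlib.h>
#include <string.h>

/* Distance-only Wagner-Fischer: keeps just the previous and current rows.
   Returns 0 on allocation failure (an illustrative simplification). */
static unsigned int distance_only(const char *s1, const char *s2)
{
    const size_t len1 = strlen(s1), len2 = strlen(s2);
    unsigned int *prev = malloc((len2 + 1) * sizeof(unsigned int));
    unsigned int *curr = malloc((len2 + 1) * sizeof(unsigned int));
    unsigned int result = 0;
    size_t i, j;
    if (prev && curr) {
        for (j = 0; j <= len2; j++) {
            prev[j] = j; /* transforming "" into a prefix of s2 costs j insertions */
        }
        for (i = 1; i <= len1; i++) {
            unsigned int *tmp;
            curr[0] = i; /* deleting the first i characters of s1 */
            for (j = 1; j <= len2; j++) {
                unsigned int subst = prev[j - 1] + (s1[i - 1] == s2[j - 1] ? 0 : 1);
                unsigned int del = prev[j] + 1;
                unsigned int ins = curr[j - 1] + 1;
                curr[j] = subst;
                if (del < curr[j]) curr[j] = del;
                if (ins < curr[j]) curr[j] = ins;
            }
            tmp = prev; prev = curr; curr = tmp;
        }
        result = prev[len2];
    }
    free(prev);
    free(curr);
    return result;
}
```

The full version below keeps the entire matrix instead, precisely so that the chain of edits can be read back.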

Below is an implementation in C. I have recorded in the matrix cells information about the edits and the previous cell from which each cell was derived to enable tracing back the sequence of edits – the edit script. The function levenshtein_distance() takes two strings, and the address of a pointer to which to assign an array containing the edit script. It returns the Levenshtein distance.

#include <string.h>
#include <stdlib.h>

typedef enum {
INSERTION,
DELETION,
SUBSTITUTION,
NONE
} edit_type;

struct edit {
unsigned int score;
edit_type type;
char arg1;
char arg2;
unsigned int pos;
struct edit *prev;
};
typedef struct edit edit;

static int min3(int a, int b, int c)
{
    int min = a;
    if (b < min) {
        min = b;
    }
    if (c < min) {
        min = c;
    }
    return min;
}

static unsigned int levenshtein_matrix_calculate(edit **mat, const char *str1, size_t len1,
const char *str2, size_t len2)
{
unsigned int i, j;
for (j = 1; j <= len2; j++) {
for (i = 1; i <= len1; i++) {
unsigned int substitution_cost;
unsigned int del = 0, ins = 0, subst = 0;
unsigned int best;
if (str1[i - 1] == str2[j - 1]) {
substitution_cost = 0;
}
else {
substitution_cost = 1;
}
del = mat[i - 1][j].score + 1; /* deletion */
ins = mat[i][j - 1].score + 1; /* insertion */
subst = mat[i - 1][j - 1].score + substitution_cost; /* substitution */
best = min3(del, ins, subst);
mat[i][j].score = best;
mat[i][j].arg1 = str1[i - 1];
mat[i][j].arg2 = str2[j - 1];
mat[i][j].pos = i - 1;
if (best == del) {
mat[i][j].type = DELETION;
mat[i][j].prev = &mat[i - 1][j];
}
else if (best == ins) {
mat[i][j].type = INSERTION;
mat[i][j].prev = &mat[i][j - 1];
}
else {
if (substitution_cost > 0) {
mat[i][j].type = SUBSTITUTION;
}
else {
mat[i][j].type = NONE;
}
mat[i][j].prev = &mat[i - 1][j - 1];
}
}
}
return mat[len1][len2].score;
}

static edit **levenshtein_matrix_create(const char *str1, size_t len1, const char *str2,
size_t len2)
{
unsigned int i, j;
edit **mat = malloc((len1 + 1) * sizeof(edit *));
if (mat == NULL) {
return NULL;
}
for (i = 0; i <= len1; i++) {
mat[i] = malloc((len2 + 1) * sizeof(edit));
if (mat[i] == NULL) {
for (j = 0; j < i; j++) {
free(mat[j]);
}
free(mat);
return NULL;
}
}
for (i = 0; i <= len1; i++) {
mat[i][0].score = i;
mat[i][0].prev = NULL;
mat[i][0].arg1 = 0;
mat[i][0].arg2 = 0;
}

for (j = 0; j <= len2; j++) {
mat[0][j].score = j;
mat[0][j].prev = NULL;
mat[0][j].arg1 = 0;
mat[0][j].arg2 = 0;
}
return mat;
}

unsigned int levenshtein_distance(const char *str1, const char *str2, edit **script)
{
const size_t len1 = strlen(str1), len2 = strlen(str2);
unsigned int i, distance;
edit **mat, *head;

/* If either string is empty, the distance is the other string's length */
if (len1 == 0) {
return len2;
}
if (len2 == 0) {
return len1;
}
/* Initialise the matrix */
mat = levenshtein_matrix_create(str1, len1, str2, len2);
if (!mat) {
*script = NULL;
return 0;
}
/* Main algorithm */
distance = levenshtein_matrix_calculate(mat, str1, len1, str2, len2);
/* Read back the edit script */
*script = malloc(distance * sizeof(edit));
if (*script) {
i = distance - 1;
for (head = &mat[len1][len2]; head->prev != NULL; head = head->prev) {
    if (head->type != NONE) {
        memcpy(*script + i, head, sizeof(edit));
        i--;
    }
}
}
else {
distance = 0;
}
/* Clean up */
for (i = 0; i <= len1; i++) {
free(mat[i]);
}
free(mat);

return distance;
}

Here is an example program that finds the distance from CHALK to CHEESE and prints the edit script:

#include <stdlib.h>
#include <stdio.h>

void print(const edit *e)
{
if (e->type == INSERTION) {
printf("Insert %c", e->arg2);
}
else if (e->type == DELETION) {
printf("Delete %c", e->arg1);
}
else {
printf("Substitute %c for %c", e->arg2, e->arg1);
}
printf(" at %u\n", e->pos);
}

int main(void)
{
edit *script;
unsigned int distance;
unsigned int i;
const char str1[] = "CHALK";
const char str2[] = "CHEESE";
distance = levenshtein_distance(str1, str2, &script);
printf("Distance is %u:\n", distance);
for (i = 0; i < distance; i++) {
print(&script[i]);
}
free(script);
return 0;
}


The output:

Distance is 4:
Substitute E for A at 2
Substitute E for L at 3
Substitute S for K at 4
Insert E at 4


# CYK algorithm in C++

The Cocke-Younger-Kasami (CYK) algorithm is a dynamic programming algorithm for generalised parsing – i.e., parsing with ambiguous grammars where there may be more than one possible parse. The algorithm works by populating a table with the possible ways of parsing successively larger portions of the input. If the grammar’s start symbol is in the top cell of the table when it has been completed, the parse is successful.

Below is an implementation in C++. The class MB::cyk_parser is instantiated with a grammar (see Context-free grammar in C++). The parse() method attempts to recognise a sentence and displays the parsing table if it is called with a std::ostream.

The grammar must be in Chomsky Normal Form (CNF), which means that every left-hand side must be a single non-terminal symbol, and every right-hand side alternative must either be a single terminal symbol or a pair of non-terminal symbols. This is a requirement of the classic CYK algorithm, but some variants of the algorithm can handle grammars that are not in CNF.
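With the grammar in CNF, the table obeys a simple recurrence, stated here in standard notation independent of the code below: a cell covering a span of the input contains every non-terminal that can derive that span.

```text
T[i][i] contains A  whenever  A -> w           (a terminal rule matching the i-th word w)
T[i][j] contains A  whenever  A -> B C,  B is in T[i][k],  C is in T[k+1][j],
                              for some split point k with i <= k < j
```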

The parsing table is an upper-right triangular matrix internally, but is printed as a lower-left triangular matrix, as this is easier to read.

#ifndef CYK_HPP
#define CYK_HPP

#include <vector>
#include <set>
#include <iostream>
#include <iterator>

#include <grammar.hpp>

namespace MB
{

class cyk_parser
{
public:
cyk_parser(const MB::grammar& grammar)
: grammar_(grammar)
{
}

template <class InputIt>
bool parse(InputIt begin, InputIt end)
{
return parse(begin, end, nullptr);
}

template <class InputIt>
bool parse(InputIt begin, InputIt end, std::ostream& os)
{
return parse(begin, end, &os);
}

private:
template <class InputIt>
bool parse(InputIt begin, InputIt end, std::ostream *os)
{
std::vector<std::string> str;
std::copy(begin, end, std::back_inserter(str));
const size_t len = str.size();
std::vector<std::vector<std::set<std::string> > > table(len,
std::vector<std::set<std::string> >(len));
unsigned int i, j, k;

// Populate main diagonal
i = 0;
for (const std::string& c : str) {
std::vector<MB::rule::const_ptr> rules;
grammar_.get_rules_for_symbol(c, std::back_inserter(rules));
for (MB::rule::const_ptr r : rules) {
table[i][i].insert(r->left());
}
i++;
}
// Populate upper-right triangle
for (i = 1; i < len; i++) {
for (j = i; j < len; j++) {
for (k = j - i; k < j; k++) {
std::vector<std::pair<std::string, std::string> > product;
MB::detail::cartesian_product(table[j - i][k].begin(), table[j - i][k].end(),
table[k + 1][j].begin(),
table[k + 1][j].end(), std::back_inserter(product));
std::vector<MB::rule::const_ptr> rules;
grammar_.get_rules_for_symbols(product.begin(), product.end(),
std::back_inserter(rules));
for (MB::rule::const_ptr r : rules) {
table[j - i][j].insert(r->left());
}
}
}
}
if (os) {
// Print the table
for (i = 0; i < len; i++) {
k = 0;
for (j = len - i - 1; j < len; j++) {
std::copy(table[k][j].begin(), table[k][j].end(),
std::ostream_iterator<std::string>(*os, " "));
++k;
*os << '\t';
}
*os << '\n';
}
for (const std::string& c : str) {
*os << c << '\t';
}
*os << '\n';
}

// Successful if the start symbol is in the top-right cell
return table[0][len - 1].find(grammar_.get_start_left()) != table[0][len - 1].end();
}

private:
const MB::grammar& grammar_;
};

} // namespace MB

#endif // CYK_HPP


Here is an example program:

#include <fstream>
#include <iostream>

#include <grammar.hpp>
#include <cyk.hpp>

int main()
{
std::string filename = "cyk.dat";
std::ifstream ifs(filename);
if (ifs) {
MB::grammar grammar(ifs);
std::string sentence[] = {"b", "a", "a", "b", "a"};
const size_t len = sizeof(sentence) / sizeof(sentence[0]);
bool success = MB::cyk_parser(grammar).parse(sentence, sentence + len, std::cout);
std::cout << "Success: " << std::boolalpha << success << '\n';
}
else {
std::cerr << "Couldn't open " << filename << " for reading\n";
}
}



This is the grammar file:

S -> A B | B C
A -> B A | a
B -> C C | b
C -> A B | a


Here is the output. Notice the start symbol S in the top-left cell, which indicates a successful parse:

A C S
A C S
B       B
A S     B       C S     A S
B       A C     A C     B       A C
b       a       a       b       a
Success: true



Reference: The CYK Algorithm

# Subset-Sum using dynamic programming in C

The Subset-Sum problem is to determine, given a set of integers, whether there is a subset that sums to a given value. The problem is NP-complete, but can be solved in pseudo-polynomial time using dynamic programming.
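The pseudo-polynomial dynamic programme is easiest to see in its decision-only form, where a one-dimensional table of truth values suffices. The sketch below is an illustration of that recurrence (it assumes the elements are positive), separate from the full element-tracking implementation that follows:

```c
#include <stdlib.h>

/* Decision-only Subset-Sum: dp[v] is 1 if some subset of `set` sums to v.
   Iterating v downwards ensures each element is used at most once.
   Assumes the elements are positive; returns 0 on allocation failure. */
static int subset_sum_exists(const unsigned int *set, size_t len, unsigned int target)
{
    unsigned char *dp = calloc(target + 1, sizeof(unsigned char));
    size_t i;
    unsigned int v;
    int found;
    if (dp == NULL) {
        return 0;
    }
    dp[0] = 1; /* the empty subset sums to 0 */
    for (i = 0; i < len; i++) {
        for (v = target; v >= set[i]; v--) {
            if (dp[v - set[i]]) {
                dp[v] = 1;
            }
        }
    }
    found = dp[target];
    free(dp);
    return found;
}
```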

Below is an implementation in C. The algorithm works by filling in a table. The rows of the table correspond to values from 0 to the target value, while the columns correspond to elements of the set. Each cell in the table represents whether there is a solution for that row’s total using elements chosen from that column and the columns to its left. When the table has been filled, the truth value in the bottom-right-hand cell will indicate whether there is a successful solution. I have used chains of pointers within the table to allow the elements in the solution to be read back.

#include <stdlib.h>

struct entry {
unsigned int truth;
int element;
unsigned int count;
struct entry *prev;
};
typedef struct entry entry;

unsigned int subset_sum(const unsigned int *weights, size_t len, unsigned int target,
unsigned int **solution)
{
entry **table;
unsigned int i, j;
unsigned int count = 0;

table = malloc((target + 1) * sizeof(entry *));
for (i = 0; i <= target; i++) {
table[i] = malloc((len + 1 ) * sizeof(entry));
}

for (i = 0; i <= target; i++) {
for (j = 0; j <= len; j++) {
/* First row */
if (i == 0) {
table[i][j].truth = 1;
table[i][j].element = -1;
table[i][j].count = 0;
table[i][j].prev = NULL;
}
else if (j == 0) {
/* First column */
table[i][j].truth = 0;
table[i][j].element = -1;
table[i][j].count = 0;
table[i][j].prev = NULL;
}
else {
/* Initialise for don't add element case */
table[i][j].truth = table[i][j-1].truth;
table[i][j].prev = &table[i][j - 1];
table[i][j].element = -1;
table[i][j].count = table[i][j - 1].count;
if (i >= weights[j-1]) {
/* Can add element */
if (!table[i][j].truth) {
/* Add element */
table[i][j].truth = table[i - weights[j-1]][j-1].truth;
table[i][j].element = j - 1;
table[i][j].count = table[i - weights[j-1]][j-1].count + 1;
table[i][j].prev = &table[i - weights[j-1]][j-1];
}
}
}
}
}

if (!table[target][len].truth) {
/* No solution */
*solution = NULL;
}
else {
/* Solution */
*solution = calloc(len, sizeof(unsigned int));
}
if (*solution) {
    /* Read back elements */
    const entry *head;
    count = table[target][len].count;
    for (head = &table[target][len]; head != NULL; head = head->prev) {
        if (head->element != -1) {
            (*solution)[head->element] = 1;
        }
    }
}

for (i = 0; i <= target; i++) {
free(table[i]);
}
free(table);

return count;
}


Here is an example program:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
unsigned int weights[] = {5, 7, 1, 3, 11, 9};
unsigned int target = 14;
const size_t len = sizeof(weights) / sizeof(weights[0]);
unsigned int *solution;
const unsigned int count = subset_sum(weights, len, target, &solution);
if (count) {
unsigned int i;
printf("Found a solution\n");
printf("Elements:\n");
for (i = 0; i < len; i++) {
if (solution[i]) {
printf("%u ", weights[i]);
}
}
putchar('\n');
}
else {
printf("No solution\n");
}
free(solution);

return 0;
}


The output:

Found a solution
Elements:
3 11


# Making change using dynamic programming in C

A coin system is canonical if the greedy algorithm for making change is optimal for all values. Real-world coin systems are designed to be canonical, for the obvious practical reason that change can then be made greedily. For arbitrary coin systems, where greedy can be suboptimal, finding optimal change is NP-hard in general, but, as with Subset-Sum, we can use dynamic programming to solve the change-making problem in pseudo-polynomial time.

This is another table-filling algorithm. The table has a column for every value from 0 to the target value, and a row for every coin. Each cell is filled with the minimum number of coins needed to make that column’s value using only the coins from that row and the rows above it. When the table has been filled, the minimum number of coins for the target value can be read from the bottom-right-hand cell.
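Stripped of the read-back chains, this table computation reduces to a one-dimensional recurrence over values. The sketch below is illustrative only and is separate from the full implementation that follows; it returns just the minimum coin count, or UINT_MAX when the value cannot be made (assuming positive coin values):

```c
#include <limits.h>
#include <stdlib.h>

/* Minimum number of coins summing to `value`, with an unlimited supply of
   each coin. Returns UINT_MAX if the value cannot be made (or on
   allocation failure). Assumes coin values are positive. */
static unsigned int min_coins(const unsigned int *coins, size_t len, unsigned int value)
{
    unsigned int *dp = malloc((value + 1) * sizeof(unsigned int));
    unsigned int v, result;
    size_t i;
    if (dp == NULL) {
        return UINT_MAX;
    }
    dp[0] = 0; /* zero coins are needed to make 0 */
    for (v = 1; v <= value; v++) {
        dp[v] = UINT_MAX;
        for (i = 0; i < len; i++) {
            if (coins[i] <= v && dp[v - coins[i]] != UINT_MAX
                    && dp[v - coins[i]] + 1 < dp[v]) {
                dp[v] = dp[v - coins[i]] + 1;
            }
        }
    }
    result = dp[value];
    free(dp);
    return result;
}
```

On the non-canonical system {1, 3, 4}, for instance, this finds that 6 needs two coins (3 + 3), where greedy would take three (4 + 1 + 1).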

Below is an implementation in C. As with other dynamic programming algorithms, I have used pointers within the table to link each cell to the cell from which its value was derived. These pointers can then be used to read back how many of each coin was used in producing the solution. The function make_change() takes an array of coin values, its length, a target value, and the address of a pointer to which to assign the solution array. The solution array has an entry for each coin and indicates how many of each coin were used in the solution.

#include <stdlib.h>

struct change_entry {
unsigned int count;
int coin;
struct change_entry *prev;
};
typedef struct change_entry change_entry;

unsigned int make_change(const unsigned int *coins, size_t len, unsigned int value,
unsigned int **solution)
{
unsigned int i, j;
change_entry **table;
unsigned int result;

/* Allocate the table */
table = malloc((len + 1) * sizeof(change_entry *));
for (i = 0; i <= len; i++) {
table[i] = malloc((value + 1) * sizeof(change_entry));
}

/* Calculate the values and build chains */
for (i = 0; i <= len; i++) {
for (j = 0; j <= value; j++) {
if (i == 0) {
/* Initialising the first row */
table[i][j].count = j;
table[i][j].coin = -1;
table[i][j].prev = NULL;
}
else if (j == 0) {
/* Initialising the first column */
table[i][j].count = 0;
table[i][j].coin = -1;
table[i][j].prev = NULL;
}
else if (coins[i - 1] == j) {
/* Can use coin */
table[i][j].count = 1;
table[i][j].coin = i - 1;
table[i][j].prev = NULL;
}
else if (coins[i - 1] > j) {
/* Can't use coin */
table[i][j].count = table[i - 1][j].count;
table[i][j].coin = -1;
table[i][j].prev = &table[i - 1][j];
}
else {
/* Can use coin - choose best solution */
if (table[i - 1][j].count < table[i][j - coins[i - 1]].count + 1) {
/* Don't use coin */
table[i][j].count = table[i - 1][j].count;
table[i][j].coin = -1;
table[i][j].prev = &table[i - 1][j];
}
else {
/* Use coin */
table[i][j].count = table[i][j - coins[i - 1]].count + 1;
table[i][j].coin = i - 1;
table[i][j].prev = &table[i][j - coins[i - 1]];
}
}
}
}
result = table[len][value].count;
/* Read back the solution */
*solution = calloc(len, sizeof(unsigned int));
if (*solution) {
change_entry *head;
for (head = &table[len][value]; head != NULL; head = head->prev) {
if (head->coin != -1) {
(*solution)[head->coin]++;
}
}
}
else {
result = 0;
}

for (i = 0; i <= len; i++) {
free(table[i]);
}
free(table);

return result;
}


Here is an example program:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
unsigned int coins[] = {1, 2, 5, 10, 20, 50, 100};
const size_t len = sizeof(coins) / sizeof(coins[0]);
const unsigned int value = 252;
unsigned int *solution;
unsigned int result = make_change(coins, len, value, &solution);
unsigned int i;
printf("Number of coins: %u\n", result);
printf("Coins used:\n");
for (i = 0; i < len; i++) {
if (solution[i] > 0) {
printf("%u x %u\n", solution[i], coins[i]);
}
}
free(solution);
return 0;
}


The output:

Number of coins: 4
Coins used:
1 x 2
1 x 50
2 x 100


# Partition Problem using dynamic programming in C

The Partition Problem is to determine if, given a set of positive integers, there is a subset that sums to precisely half of the sum of the set. For example, given the numbers from 1 to 7, their sum is 28, half of which is 14, and the subset {2, 3, 4, 5} sums to 14, so this is a positive instance. The partition problem is NP-complete, but can be solved in pseudo-polynomial time using dynamic programming.

The algorithm works by constructing an array with entries for every number from 0 to the sum of the set, and filling in the cells that are the sum of some subset. Once complete, the solution can be read back from the middle cell of the array, which corresponds to a subset that sums to half of the total sum. At first, only the 0 cell is filled in, because it is always possible to find a subset that sums to 0 — the empty set. After that, the cells to fill in are found by simulating repeated additions of the numbers from the set to the already-filled-in cells. For example, if the cell for some number $$m$$ is already filled in, and the set contains the number $$n$$, then the cell $$m + n$$ can be filled in. When a cell is filled in, a pointer is set to point back to the number from which it has been derived so that the solution can be read back.
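The filling step described above can be sketched without the back-pointers. This decision-only version (can_partition() is a hypothetical helper, not the partition() function below) just marks which sums are reachable and checks the middle cell:

```c
#include <stdlib.h>

/* Decision-only sketch: returns 1 if some subset of numbers sums to half
   the total, 0 otherwise. No chains are kept, so the subset itself
   cannot be read back from this version. */
int can_partition(const unsigned int *numbers, size_t len)
{
    unsigned int total = 0, i, j;
    unsigned char *reachable;
    int result;
    for (i = 0; i < len; i++) {
        total += numbers[i];
    }
    if (total % 2 == 1) {
        return 0; /* an odd total cannot be split in half */
    }
    reachable = calloc(total + 1, 1);
    if (reachable == NULL) {
        return 0;
    }
    reachable[0] = 1; /* the empty set sums to 0 */
    for (i = 0; i < len; i++) {
        /* Scan right-to-left so each number is used at most once */
        for (j = total - numbers[i] + 1; j-- > 0; ) {
            if (reachable[j]) {
                reachable[j + numbers[i]] = 1;
            }
        }
    }
    result = reachable[total / 2];
    free(reachable);
    return result;
}
```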

Below is the code in C. The function partition() takes an array of numbers, its length, and the address of a pointer to which to attach the solution array. It returns the number of elements in the solution subset if there is a partition, or 0 otherwise.

#include <stdlib.h>

struct partition_entry {
unsigned int value;
unsigned int truth;
unsigned int count;
struct partition_entry *prev;
};
typedef struct partition_entry partition_entry;

unsigned int sum(const unsigned int *numbers, size_t len)
{
unsigned int i, total = 0;
for (i = 0; i < len; i++) {
total += numbers[i];
}
return total;
}

int compare_unsigned_ints(const unsigned int *p1, const unsigned int *p2)
{
if (*p1 > *p2) {
return 1;
}
if (*p1 < *p2) {
return -1;
}
return 0;
}

unsigned int min(unsigned int first, unsigned int second)
{
if (first < second) {
return first;
}
return second;
}

unsigned int partition(unsigned int *numbers, size_t n, unsigned int **solution)
{
unsigned int total = sum(numbers, n);
int i, j, rightmost = 0, size = 0;
partition_entry *partitions;
if (total % 2 == 1) {
/* Total is odd */
*solution = NULL;
return 0;
}
/* Initialise array */
partitions = malloc((total + 1) * sizeof(partition_entry));
if (partitions == NULL) {
*solution = NULL;
return 0;
}
for (i = 0; i <= total; i++) {
partitions[i].value = i;
partitions[i].truth = 0;
partitions[i].count = 0;
partitions[i].prev = NULL;
}
partitions[0].truth = 1;
/* Sort the numbers */
qsort(numbers, n,
sizeof(unsigned int), (int(*)(const void *, const void *))compare_unsigned_ints);
/* Fill in the sums and build chains */
for (i = 0; i < n; i++) {
for (j = rightmost; j >= 0; j--) {
if (partitions[j].truth && !partitions[j + numbers[i]].truth) {
partitions[j + numbers[i]].truth = 1;
partitions[j + numbers[i]].count = partitions[j].count + 1;
partitions[j + numbers[i]].prev = &partitions[j];
}
}
rightmost = min(total / 2, rightmost + numbers[i]);
}
*solution = NULL;
if (partitions[total / 2].truth) {
/* Success, read back the solution */
partition_entry *ptr = &partitions[total / 2];
unsigned int number = ptr->value;
size = partitions[total / 2].count;
*solution = malloc(size * sizeof(unsigned int));
if (*solution) {
for (i = 0, j = size - 1; i < size; i++, j--) {
ptr = ptr->prev;
(*solution)[j] = number - ptr->value;
number = ptr->value;
}
}
else {
size = 0;
}
}
free(partitions);
return size;
}


Here is an example that looks for a partition of the numbers from 1 to 7:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
unsigned int numbers[] = {7, 6, 5, 4, 3, 2, 1};
const size_t len = sizeof(numbers) / sizeof(numbers[0]);
unsigned int *solution;
unsigned int i;
unsigned int size = partition(numbers, len, &solution);
printf("Partition size: %u\n", size);
for (i = 0; i < size; i++) {
printf("%u ", solution[i]);
}
putchar('\n');
free(solution);
return 0;
}


The output:

Partition size: 4
2 3 4 5


# 0-1 Knapsack using dynamic programming in C

Recall that the 0-1 Knapsack problem is to fill a knapsack of given capacity with items of given weights and values in order to maximise the value of the knapsack’s contents. This can be solved in pseudo-polynomial time using dynamic programming.

This is another table-filling algorithm. The rows of the table represent items, and the columns represent knapsack capacities from 0 up to the full capacity. Each cell holds the maximum value achievable within that capacity using the items from that row upwards, so the answer can be read from the bottom-right-hand cell.
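Each cell follows the recurrence table[i][j] = max(table[i-1][j], values[i-1] + table[i-1][j - weights[i-1]]) when item i fits in capacity j. When only the optimal value is needed, and not the list of items, a single row of the table suffices; a sketch (knapsack_value() is a hypothetical helper, distinct from the full implementation below):

```c
#include <stdlib.h>

/* Value-only 0-1 knapsack using a single row of the table. row[j] holds
   the best value for capacity j over the items seen so far. Weights are
   assumed to be positive. */
unsigned int knapsack_value(const unsigned int *weights, const unsigned int *values,
size_t n, unsigned int capacity)
{
    unsigned int *row = calloc(capacity + 1, sizeof(unsigned int));
    unsigned int j, best;
    size_t i;
    if (row == NULL) {
        return 0;
    }
    for (i = 0; i < n; i++) {
        /* Go right-to-left so each item is used at most once */
        for (j = capacity; j >= weights[i]; j--) {
            const unsigned int candidate = values[i] + row[j - weights[i]];
            if (candidate > row[j]) {
                row[j] = candidate;
            }
        }
    }
    best = row[capacity];
    free(row);
    return best;
}
```

For the weights, values and capacity used in the example program below, this returns the same optimal value, 26.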

#include <stdlib.h>

struct knapsack_entry {
unsigned int value;
unsigned int item;
struct knapsack_entry *prev;
};
typedef struct knapsack_entry knapsack_entry;

unsigned int knapsack(const unsigned int *weights, const unsigned int *values, size_t n,
unsigned int capacity, unsigned int **solution)
{
unsigned int i, j;
knapsack_entry **table;
unsigned int result;

/* Allocate the table */
table = malloc((n + 1) * sizeof(knapsack_entry *));
for (i = 0; i <= n; i++) {
table[i] = malloc((capacity + 1) * sizeof(knapsack_entry));
}

/* Calculate the values and build chains */
for (i = 0; i <= n; i++) {
for (j = 0; j <= capacity; j++) {
if (i == 0 || j == 0) {
/* Initialising the first row or column */
table[i][j].value = 0;
table[i][j].item = 0;
table[i][j].prev = NULL;
}
else if (weights[i - 1] <= j) {
/* Can add item */
if (values[i - 1] + table[i - 1][j - weights[i - 1]].value
> table[i - 1][j].value) {
/* Add item */
table[i][j].value = values[i - 1] + table[i - 1][j - weights[i - 1]].value;
table[i][j].item = i - 1;
table[i][j].prev = &table[i - 1][j - weights[i - 1]];
}
else {
/* Don't add item */
table[i][j].value = table[i - 1][j].value;
table[i][j].item = table[i - 1][j].item;
table[i][j].prev = table[i - 1][j].prev;
}
}
else {
/* Don't add item */
table[i][j].value = table[i - 1][j].value;
table[i][j].item = table[i - 1][j].item;
table[i][j].prev = table[i - 1][j].prev;
}
}
}

result = table[n][capacity].value;
/* Read back the solution */
*solution = calloc(n, sizeof(unsigned int));
if (*solution) {
knapsack_entry *head;
for (head = &table[n][capacity]; head != NULL; head = head->prev) {
if (head->value > 0) {
(*solution)[head->item] = 1;
}
}
}
else {
result = 0;
}
for (i = 0; i <= n; i++) {
free(table[i]);
}
free(table);
return result;
}


Here is an example program:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
unsigned int weights[] = {2, 3, 4, 5, 9};
unsigned int values[] = {3, 4, 5, 8, 10};
const unsigned int capacity = 20;
const size_t n = sizeof(weights) / sizeof(weights[0]);
unsigned int *solution;
unsigned int value = knapsack(weights, values, n, capacity, &solution);
unsigned int i;
printf("Value: %u\n", value);
printf("Items in knapsack:\n");
for (i = 0; i < n; i++) {
if (solution[i]) {
printf("Item %u with weight %u and value %u\n", i, weights[i], values[i]);
}
}
free(solution);
return 0;
}


The output:

Value: 26
Items in knapsack:
Item 0 with weight 2 and value 3
Item 2 with weight 4 and value 5
Item 3 with weight 5 and value 8
Item 4 with weight 9 and value 10


# Bellman-Ford algorithm in C

The Bellman-Ford algorithm finds the shortest path between a single vertex and all other vertices in a graph. This is the same problem that Dijkstra’s algorithm solves, but unlike Dijkstra, the Bellman-Ford algorithm can handle graphs with negative edge weights.

One consequence of negative weights is that a graph can contain a negative cycle, and if this is the case, the shortest path to all vertices reachable from the cycle is negative infinity, because one could traverse the cycle any number of times and keep reducing the path length indefinitely. The Bellman-Ford algorithm detects this by performing one extra relaxation pass over the edges; if any distance would still decrease, there must be a negative cycle.
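The extra pass can be demonstrated on its own. The sketch below (has_negative_cycle() is a hypothetical helper, not the bellman_ford() function that follows) relaxes all edges once per vertex and then checks whether a further pass would still improve any distance:

```c
#include <limits.h>

typedef struct { int from, to, weight; } edge;

/* Returns 1 if a negative cycle is reachable from source, 0 otherwise.
   For this sketch the order (vertex count) is assumed to be at most 16. */
int has_negative_cycle(const edge *edges, int n_edges, int order, int source)
{
    int dist[16];
    int i, pass, improved = 0;
    for (i = 0; i < order; i++) {
        dist[i] = INT_MAX;
    }
    dist[source] = 0;
    /* Relax every edge, once per vertex */
    for (pass = 0; pass < order; pass++) {
        for (i = 0; i < n_edges; i++) {
            if (dist[edges[i].from] != INT_MAX
                && dist[edges[i].from] + edges[i].weight < dist[edges[i].to]) {
                dist[edges[i].to] = dist[edges[i].from] + edges[i].weight;
            }
        }
    }
    /* One extra pass: any further improvement reveals a negative cycle */
    for (i = 0; i < n_edges; i++) {
        if (dist[edges[i].from] != INT_MAX
            && dist[edges[i].from] + edges[i].weight < dist[edges[i].to]) {
            improved = 1;
        }
    }
    return improved;
}
```

The cycle 0 -> 1 -> 2 -> 0 with weights 2, 2 and -5 has total weight -1, so the extra pass still finds improvable edges.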

Below is an implementation in C. It takes a set of weighted arcs, the number of arcs (size), the number of vertices (order), and a starting vertex number. It returns an array of integers containing the distance to every vertex. If the graph contains a negative cycle, it returns NULL.

#include <stdlib.h>
#include <limits.h>

typedef struct {
unsigned int first;
unsigned int second;
int weight;
} weighted_arc;

int *bellman_ford(const weighted_arc *arcs, unsigned int size, unsigned int order,
unsigned int vertex)
{
unsigned int i;
int *distances = malloc(order * sizeof(int));
unsigned int a, v, negative_cycle = 0;
if (distances == NULL) {
return NULL;
}
/* All distances start infinite */
for (i = 0; i < order; i++) {
distances[i] = INT_MAX;
}
/* Distance to starting vertex is 0 */
distances[vertex] = 0;
/* Relax edges repeatedly */
for (v = 0; v < order; v++) {
for (a = 0; a < size; a++) {
const unsigned int first = arcs[a].first;
const unsigned int second = arcs[a].second;
const int weight = arcs[a].weight;
if (distances[first] != INT_MAX
&& distances[first] + weight < distances[second])
{
distances[second] = distances[first] + weight;
}
}
}
/* Check for negative weight cycle */
for (a = 0; a < size && !negative_cycle; a++) {
const unsigned int first = arcs[a].first;
const unsigned int second = arcs[a].second;
const int weight = arcs[a].weight;
negative_cycle = distances[first] != INT_MAX
&& distances[first] + weight < distances[second];
}
if (negative_cycle) {
free(distances);
distances = NULL;
}

return distances;
}


Here is an example program that computes the distances for the graph shown at the top:

#include <stdio.h>
#include <stdlib.h>

/* Connect two arcs */
void weighted_arc_connect(weighted_arc *arcs, unsigned int first, unsigned int second,
int weight, unsigned int *pos)
{
arcs[*pos].first = first;
arcs[*pos].second = second;
arcs[*pos].weight = weight;
(*pos)++;
}

int main(void)
{
unsigned int size = 9; /* Arcs */
unsigned int order = 6; /* Vertices */
unsigned int i = 0;
int *distances;

weighted_arc *arcs = malloc(size * sizeof(weighted_arc));
weighted_arc_connect(arcs, 0, 1, 5, &i);
weighted_arc_connect(arcs, 0, 3, -2, &i);
weighted_arc_connect(arcs, 1, 2, 1, &i);
weighted_arc_connect(arcs, 2, 3, 2, &i);
weighted_arc_connect(arcs, 2, 4, 7, &i);
weighted_arc_connect(arcs, 2, 5, 3, &i);
weighted_arc_connect(arcs, 3, 1, 2, &i);
weighted_arc_connect(arcs, 4, 3, 3, &i);
weighted_arc_connect(arcs, 4, 5, 10, &i);

distances = bellman_ford(arcs, size, order, 0);
if (distances == NULL) {
printf("Graph contains a negative cycle or ran out of memory.\n");
return 1;
}

for (i = 0; i < order; i++) {
printf("%u: %d\n", i, distances[i]);
}

free(arcs);
free(distances);
return 0;
}


The output:

0: 0
1: 0
2: 1
3: -2
4: 8
5: 4


# Floyd-Warshall algorithm in C

The Floyd-Warshall algorithm calculates the distances between all pairs of vertices in a weighted graph. It is a dynamic programming algorithm very similar to Gauss-Jordan elimination.

Below is an implementation in C. The function takes an array of directed arcs, the size of the graph (number of arcs), and its order (number of vertices). It returns a dynamically allocated 2-dimensional array that is the distance table for all pairs of vertices.

One thing to notice about the implementation is that when you use INT_MAX to represent infinity, you need to be very careful not to do arithmetic on it that will cause it to roll over and become a small number. This is the reason for the checks for INT_MAX in the main calculation.
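The guard can be factored into a small helper that treats INT_MAX as an absorbing infinity; add_distance() is a hypothetical illustration of the idea, not part of the implementation below:

```c
#include <limits.h>

/* Add two path lengths, treating INT_MAX as infinity. Without the check,
   INT_MAX + weight would overflow (undefined behaviour for signed int)
   and wrap to a large negative number. */
int add_distance(int a, int b)
{
    if (a == INT_MAX || b == INT_MAX) {
        return INT_MAX;
    }
    return a + b;
}
```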

#include <stdlib.h>
#include <limits.h>

typedef struct {
unsigned int first;
unsigned int second;
int weight;
} weighted_arc;

int **floyd_warshall(const weighted_arc *arcs, unsigned int size, unsigned int order)
{
unsigned int i, j, k;
int **distances = malloc(order * sizeof(int *));
/* Initialise the distance table */
for (i = 0; i < order; i++) {
distances[i] = malloc(order * sizeof(int));
for (j = 0; j < order; j++) {
if (i == j) {
distances[i][j] = 0;
}
else {
distances[i][j] = INT_MAX;
}
}
}
/* Add the distance for each arc */
for (i = 0; i < size; i++) {
distances[arcs[i].first][arcs[i].second] = arcs[i].weight;
}
/* Calculate the rest of the distances */
for (i = 0; i < order; i++) {
for (j = 0; j < order; j++) {
for (k = 0; k < order; k++) {
const int djk = distances[j][k];
const int dji = distances[j][i];
const int dik = distances[i][k];
if (dji != INT_MAX && dik != INT_MAX
&& djk > dji + dik)
{
distances[j][k] = dji + dik;
}
}
}
}
return distances;
}


Here is an example program that calculates the distance table for the graph shown at the top of the page:

#include <stdio.h>
#include <stdlib.h>

/* Connect two arcs */
void weighted_arc_connect(weighted_arc *arcs, unsigned int first, unsigned int second,
int weight, unsigned int *pos)
{
arcs[*pos].first = first;
arcs[*pos].second = second;
arcs[*pos].weight = weight;
(*pos)++;
}

/* Print a distance table */
void print_distances(int **distances, unsigned int order)
{
unsigned int i, j;
for (i = 0; i < order; i++) {
for (j = 0; j < order; j++) {
printf("%d ", distances[i][j]);
}
putchar('\n');
}
}

int main(void)
{
unsigned int size = 5; /* Arcs */
unsigned int order = 4; /* Vertices */
unsigned int i = 0;
int **distances;
weighted_arc *arcs = malloc(size * sizeof(weighted_arc));
weighted_arc_connect(arcs, 0, 2, -2, &i);
weighted_arc_connect(arcs, 1, 0, 4, &i);
weighted_arc_connect(arcs, 1, 2, 3, &i);
weighted_arc_connect(arcs, 2, 3, 2, &i);
weighted_arc_connect(arcs, 3, 1, -1, &i);
distances = floyd_warshall(arcs, size, order);
print_distances(distances, order);
free(arcs);
for (i = 0; i < order; i++) {
free(distances[i]);
}
free(distances);
return 0;
}



The output:

0 -1 -2 0
4 0 2 4
5 1 0 2
3 -1 1 0


# Shortest Common Supersequence via LCS in C

In my Shortest Common Supersequence post I gave a dynamic programming algorithm that computes the SCS in a manner very similar to the Longest Common Subsequence algorithm. Another way of computing the SCS is to find the LCS first, and then insert into it those characters from the two strings that are not in the LCS.

As an example, given the two strings “pAqBrCs” and “wAxByCz”, their LCS is “ABC”, and we can form their SCS by inserting the remaining characters from each string into it in the correct order. One such SCS would be “pwAqxBryCsz”. There are several possibilities, because at each point it doesn't matter whether we insert the pending characters from the first string or the second string first, as long as each string's characters are added in the order they occur in that string.

The algorithm just needs to scan the two strings, adding the characters not in the LCS while keeping track of where the LCS characters are, and making sure they are only added once. This is how it looks in C:

#include <string.h>
#include <stdlib.h>

char *shortest_common_supersequence_by_lcs(const char *str1, const char *str2)
{
char *lcs = longest_common_subsequence(str1, str2);
const size_t lcs_len = strlen(lcs);
const size_t scs_len = lcs_len + (strlen(str1) - lcs_len) + (strlen(str2) - lcs_len);
char *scs = malloc(scs_len + 1);
unsigned int lcs_pos = 0, str1_pos = 0, str2_pos = 0, scs_pos = 0;

/* Add the characters up to the last LCS character */
while (lcs_pos < lcs_len) {
while (str1[str1_pos] != lcs[lcs_pos]) {
scs[scs_pos++] = str1[str1_pos++];
}
while (str2[str2_pos] != lcs[lcs_pos]) {
scs[scs_pos++] = str2[str2_pos++];
}
scs[scs_pos++] = lcs[lcs_pos++];
str1_pos++;
str2_pos++;
}
/* Add the characters after the LCS */
while (str1[str1_pos]) {
scs[scs_pos++] = str1[str1_pos++];
}
while (str2[str2_pos]) {
scs[scs_pos++] = str2[str2_pos++];
}
scs[scs_pos] = '\0';
free(lcs);
return scs;
}


An example program:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
char a[] = "pAqBrCs";
char b[] = "wAxByCz";
char *scs = shortest_common_supersequence_by_lcs(a, b);
printf("Shortest Common Supersequence is %s\n", scs);
free(scs);
return 0;
}


Output:

Shortest Common Supersequence is pwAqxBryCsz