This page uses content from Wikipedia and is licensed under CC BY-SA.
In computer science, the Wagner–Fischer algorithm is a dynamic programming algorithm that computes the edit distance between two strings of characters.
The Wagner–Fischer algorithm has a history of multiple invention. Navarro lists the following inventors of it, with date of publication, and acknowledges that the list is incomplete:^{[1]}^{:43}
The Wagner–Fischer algorithm computes edit distance based on the observation that if we reserve a matrix to hold the edit distances between all prefixes of the first string and all prefixes of the second, then we can compute the values in the matrix by flood filling the matrix, and thus find the distance between the two full strings as the last value computed.
A straightforward implementation, as pseudocode for a function EditDistance that takes two strings, s of length m and t of length n, and returns the Levenshtein distance between them, looks as follows. Note that the input strings are one-indexed, while the matrix d is zero-indexed, and [i..k] is a closed range.
int EditDistance(char s[1..m], char t[1..n])
    // For all i and j, d[i,j] will hold the Levenshtein distance between
    // the first i characters of s and the first j characters of t.
    // Note that d has (m+1) x (n+1) values.
    let d be a 2-d array of int with dimensions [0..m, 0..n]

    for i in [0..m]
        d[i, 0] ← i  // the distance of any first string to an empty second string
                     // (transforming the string of the first i characters of s into
                     // the empty string requires i deletions)
    for j in [0..n]
        d[0, j] ← j  // the distance of any second string to an empty first string

    for j in [1..n]
        for i in [1..m]
            if s[i] = t[j] then
                d[i, j] ← d[i-1, j-1]        // no operation required
            else
                d[i, j] ← minimum of
                          (
                            d[i-1, j] + 1,   // a deletion
                            d[i, j-1] + 1,   // an insertion
                            d[i-1, j-1] + 1  // a substitution
                          )

    return d[m, n]
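The pseudocode above translates almost directly into Python; the sketch below keeps the same matrix layout, with the one adjustment that Python strings are zero-indexed (function and variable names here are illustrative, not part of the original article):

```python
def edit_distance(s: str, t: str) -> int:
    """Levenshtein distance between s and t via the full
    (m+1) x (n+1) Wagner-Fischer matrix."""
    m, n = len(s), len(t)
    # d[i][j] = distance between the first i chars of s and first j chars of t
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # i deletions turn the prefix s[:i] into ""
    for j in range(n + 1):
        d[0][j] = j          # j insertions turn "" into the prefix t[:j]
    for j in range(1, n + 1):
        for i in range(1, m + 1):
            if s[i - 1] == t[j - 1]:          # Python strings are 0-indexed
                d[i][j] = d[i - 1][j - 1]     # no operation required
            else:
                d[i][j] = min(d[i - 1][j] + 1,      # a deletion
                              d[i][j - 1] + 1,      # an insertion
                              d[i - 1][j - 1] + 1)  # a substitution
    return d[m][n]
```

For example, `edit_distance("kitten", "sitting")` returns 3 (two substitutions and one insertion).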
The invariant maintained throughout the algorithm is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i,j] operations. At the end, the bottom-right element of the array contains the answer.
As mentioned earlier, the invariant is that we can transform the initial segment s[1..i] into t[1..j] using a minimum of d[i,j] operations. This invariant holds since:

- It is initially true on row and column 0, because s[1..i] can be transformed into the empty string t[1..0] by simply dropping all i characters. Similarly, we can transform s[1..0] to t[1..j] by simply adding all j characters.
- If s[i] = t[j], and we can transform s[1..i-1] to t[1..j-1] in k operations, then we can do the same to s[1..i] and just leave the last character alone, giving k operations.
- Otherwise:
  - If we can transform s[1..i] to t[1..j-1] in k operations, then we can simply add t[j] afterwards to get t[1..j] in k+1 operations (insertion).
  - If we can transform s[1..i-1] to t[1..j] in k operations, then we can remove s[i] and then do the same transformation, for a total of k+1 operations (deletion).
  - If we can transform s[1..i-1] to t[1..j-1] in k operations, then we can do the same to s[1..i], and exchange the original s[i] for t[j] afterwards, for a total of k+1 operations (substitution).

The number of operations required to transform s[1..m] into t[1..n] is of course the number required to transform all of s into all of t, and so d[m,n] holds our result. This proof fails to validate that the number placed in d[i,j] is in fact minimal; this is more difficult to show, and involves an argument by contradiction in which we assume d[i,j] is smaller than the minimum of the three, and use this to show one of the three is not minimal.
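The case analysis in the invariant also shows how to recover one minimal sequence of operations: walk the matrix back from d[m,n] to d[0,0], at each cell checking which predecessor produced its value. A Python sketch (the function name and output format are illustrative, not from the article):

```python
def edit_script(s: str, t: str) -> list[str]:
    """Recover one minimal edit script by backtracking through the
    Wagner-Fischer matrix from d[m][n] to d[0][0]."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # match or substitution
    ops = []
    i, j = m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and s[i - 1] == t[j - 1] and d[i][j] == d[i - 1][j - 1]:
            i, j = i - 1, j - 1                     # characters match: no operation
        elif i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + 1:
            ops.append(f"substitute {s[i - 1]!r} -> {t[j - 1]!r}")
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            ops.append(f"delete {s[i - 1]!r}")
            i -= 1
        else:
            ops.append(f"insert {t[j - 1]!r}")
            j -= 1
    return list(reversed(ops))                      # order from start of s to end
```

The script's length always equals the computed distance, since each backtracking step follows exactly one of the three recurrence cases (or a free match).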
Possible modifications to this algorithm include:

- We can store the number of insertions, deletions, and substitutions separately, or even the positions at which they occur, which is always j.
- We can normalize the distance to the interval [0,1].
- This algorithm parallelizes poorly, due to a large number of data dependencies. However, all the cost values can be computed in parallel, and the algorithm can be adapted to perform the minimum function in phases to eliminate dependencies.

By initializing the first row of the matrix with zeros, we obtain a variant of the Wagner–Fischer algorithm that can be used for fuzzy string search of a string in a text.^{[1]} This modification gives the end position of matching substrings of the text. To determine the start position of the matching substrings, the number of insertions and deletions can be stored separately and used to compute the start position from the end position.^{[4]}
The resulting algorithm is by no means efficient, but was at the time of its publication (1980) one of the first algorithms that performed approximate search.^{[1]}
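The fuzzy-search variant described above can be sketched as follows. The pattern indexes the rows and the text the columns; setting the first row to zeros lets a match begin at any text position, and only the previous column need be kept (the names `fuzzy_find`, `pattern`, `text`, and `max_dist` are illustrative assumptions, not from the article):

```python
def fuzzy_find(pattern: str, text: str, max_dist: int) -> list[tuple[int, int]]:
    """Fuzzy search with a zeroed first row: returns (end_position, distance)
    pairs for every text position where some substring of `text` ending
    there is within `max_dist` edits of `pattern`."""
    m = len(pattern)
    prev = list(range(m + 1))        # column 0: distance of pattern prefixes to ""
    matches = []
    for j, ch in enumerate(text, start=1):
        curr = [0] * (m + 1)         # row 0 is zero: a match may start anywhere
        for i in range(1, m + 1):
            cost = 0 if pattern[i - 1] == ch else 1
            curr[i] = min(prev[i] + 1,         # deletion from the pattern
                          curr[i - 1] + 1,     # insertion into the pattern
                          prev[i - 1] + cost)  # match or substitution
        if curr[m] <= max_dist:
            matches.append((j, curr[m]))       # a close substring ends at text[:j]
        prev = curr
    return matches
```

For instance, `fuzzy_find("abc", "xxabcxx", 0)` reports the exact occurrence ending at position 5; raising `max_dist` also reports end positions of approximate occurrences.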