
Experimenting with coding algorithms (Fano codes, Shannon codes, Huffman codes)

In Ruby.
The other day, at [情報理論やりなおし勉強会(3) : ATND http://atnd.org/events/38995], we covered the first part of chapter 4 of 情報理論の基礎―情報と学習の直観的理解のために (SGC Books). Coding algorithms came up there, so on a whim I tried them out: Fano codes, Shannon codes, and Huffman codes.
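
All three schemes try to give each symbol a codeword whose length is close to the ideal −log2(p) bits. As a quick orientation (a snippet I added for illustration; it is not part of the script below), the ideal lengths for the first example distribution:

# Added illustration: ideal codeword lengths -log2(p).
[0.5, 0.25, 0.125, 0.125].each do |p|
  puts "p=#{p}  ideal length = #{-Math.log(p, 2)} bits"
end
# p=0.5 -> 1.0, p=0.25 -> 2.0, p=0.125 -> 3.0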

#!ruby
# encoding=utf-8

# 情報理論やりなおし勉強会(3) : ATND http://atnd.org/events/38995
# Fano codes, Shannon codes, Huffman codes

class Numeric
  # Format the fractional part of self as an i-digit binary string,
  # e.g. 0.625.to_binary_s(3) #=> "101".
  def to_binary_s i
    "%.#{i.to_i}b" % (self * 2 ** i).floor
  end
end

class Array
  # Split into [head, tail]: the head is the longest prefix whose sum
  # does not exceed half of the total, but always at least one element,
  # so that both halves are non-empty.
  def split_to_half
    sum = self.reduce{|acum, elm| acum + elm }
    acum = 0.0
    i = self.take_while{|pr| (acum += pr) <= sum / 2.0 }.length
    i = 1 if i <= 0
    [self.slice(0, i), self.slice(i, self.length)]
  end

  # Apply split_to_half recursively until every leaf holds a single
  # probability, yielding a binary tree as nested arrays.
  def split_to_half_recursive
    first, second = self.split_to_half
    [first.length == 1 ? first : first.split_to_half_recursive,
     second.length == 1 ? second : second.split_to_half_recursive]
  end
end
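
# Example (added for illustration) of the recursive split on the first
# distribution used below:
#   [0.5, 0.25, 0.125, 0.125].split_to_half_recursive
#   #=> [[0.5], [[0.25], [[0.125], [0.125]]]]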

class Coder
  @@encoders = {}

  # Shannon entropy H = -Σ p * log2(p).
  #
  # @param [Array] probabilities number[]
  # @return [number]
  def self.entropy probabilities
    probabilities.reduce(0){|acum, pr| acum - pr * Math.log(pr, 2) }
  end

  # Define an encoder.
  #
  # e.g.)
  #   Coder.encoder :encoder_name do |probabilities|
  #     ['0', '10', '110', '111']
  #   end
  #
  # @param [Symbol] encoder_name
  # @param [block] &proc number[]->string[]
  def self.encoder encoder_name, &proc
    @@encoders[encoder_name] = proc
  end

  # @param [Array] probabilities number[]
  # @return [Coder]
  def initialize probabilities
    @probabilities = probabilities.sort.reverse # descending order
    @codes = {}
  end

  # @param [Symbol] encoder_name
  # @return [Array] string[]
  def encode encoder_name
    raise %Q{Unknown encoder name "#{encoder_name}".} unless @@encoders.has_key? encoder_name
    @codes[encoder_name] = @@encoders[encoder_name].call @probabilities
  end

  # Kullback-Leibler divergence between the source distribution and the
  # distribution the codeword lengths imply, q_i = 2 ** -l_i.
  #
  # @param [Symbol] encoder_name
  # @return [number]
  def kl_divergence encoder_name
    encode encoder_name unless @codes.has_key? encoder_name
    @probabilities.zip(@codes[encoder_name].map{|c| 2 ** -c.length }).reduce(0){|acum, pr|
      acum + pr[0] * Math.log(pr[0] / pr[1], 2)
    }
  end
end

Coder.encoder :fano do |pr|
  # Walk the binary split tree, prefixing '0' on the first branch and
  # '1' on the second.
  def fano_codes pr
    return [''] if pr[0].is_a? Numeric
    fano_codes(pr[0]).map{|c| "0#{c}" } + fano_codes(pr[1]).map{|c| "1#{c}" }
  end
  fano_codes pr.split_to_half_recursive
end

Coder.encoder :shannon do |pr|
  # Codeword i is the first ceil(log2(1 / p_i)) bits of the binary
  # expansion of the cumulative probability of the symbols before i.
  cumulative = pr.reduce([0]){|acum, p| acum << acum.last + p }
  cumulative.slice(0, cumulative.length - 1).zip(pr).map{|f, p|
    f.to_binary_s Math.log(1 / p, 2).ceil
  }
end

Coder.encoder :huffman do |pr|
  # A simplified "chain" construction: give the most probable symbol '0'
  # and recurse on the rest behind a '1' prefix. This happens to match
  # the true Huffman code for the skewed distributions used below, but
  # it is not the general merge-the-two-smallest algorithm.
  def huffman_codes pr
    return ['0', '1'].slice(0, pr.length) if pr.length <= 2
    ['0', *huffman_codes(pr.slice 1, pr.length).map{|c| "1#{c}" }]
  end
  huffman_codes pr
end

# Print the distribution's entropy, its codewords under the named
# encoder, and the Kullback-Leibler divergence between source and code.
#
# @param [Array] probabilities number[]
# @param [Symbol] encoder_name
def print_codes probabilities, encoder_name
  coder = Coder.new probabilities
  codes = coder.encode encoder_name
  puts <<"EOF"
===
prob:\t[#{probabilities.join ', '}]
entropy:\t#{Coder.entropy probabilities}
#{encoder_name}:\t[#{codes.join ', '}]
Kullback-Leibler divergence:\t#{coder.kl_divergence encoder_name}
===
EOF
end

print_codes [0.5, 0.25, 0.125, 0.125], :fano
print_codes [0.5, 0.25, 0.125, 0.125], :shannon
print_codes [0.5, 0.25, 0.125, 0.125], :huffman

print_codes [0.4, 0.3, 0.2, 0.1], :fano
print_codes [0.4, 0.3, 0.2, 0.1], :shannon
print_codes [0.4, 0.3, 0.2, 0.1], :huffman

print_codes [0.025, 0.075, 0.3, 0.6], :fano
print_codes [0.025, 0.075, 0.3, 0.6], :shannon
print_codes [0.025, 0.075, 0.3, 0.6], :huffman

Results

===
prob:	[0.5, 0.25, 0.125, 0.125]
entropy:	1.75
fano:	[0, 10, 110, 111]
Kullback-Leibler divergence:	0.0
===
===
prob:	[0.5, 0.25, 0.125, 0.125]
entropy:	1.75
shannon:	[0, 10, 110, 111]
Kullback-Leibler divergence:	0.0
===
===
prob:	[0.5, 0.25, 0.125, 0.125]
entropy:	1.75
huffman:	[0, 10, 110, 111]
Kullback-Leibler divergence:	0.0
===
===
prob:	[0.4, 0.3, 0.2, 0.1]
entropy:	1.8464393446710154
fano:	[0, 10, 110, 111]
Kullback-Leibler divergence:	0.05356065532898457
===
===
prob:	[0.4, 0.3, 0.2, 0.1]
entropy:	1.8464393446710154
shannon:	[00, 01, 101, 1110]
Kullback-Leibler divergence:	0.5535606553289847
===
===
prob:	[0.4, 0.3, 0.2, 0.1]
entropy:	1.8464393446710154
huffman:	[0, 10, 110, 111]
Kullback-Leibler divergence:	0.05356065532898457
===
===
prob:	[0.025, 0.075, 0.3, 0.6]
entropy:	1.3765896566842351
fano:	[0, 10, 110, 111]
Kullback-Leibler divergence:	0.12341034331576489
===
===
prob:	[0.025, 0.075, 0.3, 0.6]
entropy:	1.3765896566842351
shannon:	[0, 10, 1110, 111110]
Kullback-Leibler divergence:	0.2734103433157649
===
===
prob:	[0.025, 0.075, 0.3, 0.6]
entropy:	1.3765896566842351
huffman:	[0, 10, 110, 111]
Kullback-Leibler divergence:	0.12341034331576489
===
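
As a sanity check on these numbers (a snippet I added, not part of the original script): for each code, the average codeword length Σ p_i * l_i comes out as entropy plus divergence. For the Shannon code on [0.4, 0.3, 0.2, 0.1], that is 2.4 = 1.8464... + 0.5535....

# Added check: average code length == entropy + KL divergence.
probabilities = [0.4, 0.3, 0.2, 0.1]
coder = Coder.new probabilities
codes = coder.encode :shannon
average_length = probabilities.sort.reverse.zip(codes).
  reduce(0){|acum, (pr, c)| acum + pr * c.length }
p average_length                                               # ≈ 2.4
p Coder.entropy(probabilities) + coder.kl_divergence(:shannon) # ≈ 2.4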

Good.
The other day I also learned, in my own fashion, that choosing a coding algorithm and fixing a model of the information source may well be (at least within the computable domain) one and the same act.
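
To make that concrete (a small added sketch, not from the study session): any prefix code implicitly commits to a source model q_i = 2 ** -l_i, which is exactly the distribution the kl_divergence method above compares against.

# Added illustration: the source model a code's lengths imply.
codes = ['0', '10', '110', '111']
implied_model = codes.map{|c| 2.0 ** -c.length }
p implied_model #=> [0.5, 0.25, 0.125, 0.125]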


cf. hagino3000's blog: 情報理論の基礎4章メモ、エントロピーからHuffman符号まで http://hagino3000.blogspot.jp/2013/05/4.html (a Python version by @, who also attended that day).