---
res:
bibo_abstract:
- 'We address the following question: How redundant is the parameterisation of
ReLU networks? Specifically, we consider transformations of the weight space which
leave the function implemented by the network intact. Two such transformations
are known for feed-forward architectures: permutation of neurons within a layer,
and positive scaling of all incoming weights of a neuron coupled with inverse
scaling of its outgoing weights. In this work, we show for architectures with
non-increasing widths that permutation and scaling are in fact the only function-preserving
weight transformations. For any eligible architecture we give an explicit construction
of a neural network such that any other network that implements the same function
can be obtained from the original one by the application of permutations and rescaling. The
proof relies on a geometric understanding of boundaries between linear regions
of ReLU networks, and we hope the developed mathematical tools are of independent
interest.@eng'
bibo_authorlist:
- foaf_Person:
foaf_givenName: Phuong
foaf_name: Bui Thi Mai, Phuong
foaf_surname: Bui Thi Mai
foaf_workInfoHomepage: http://www.librecat.org/personId=3EC6EE64-F248-11E8-B48F-1D18A9856A87
- foaf_Person:
foaf_givenName: Christoph
foaf_name: Lampert, Christoph
foaf_surname: Lampert
foaf_workInfoHomepage: http://www.librecat.org/personId=40C20FD2-F248-11E8-B48F-1D18A9856A87
orcid: 0000-0001-8622-7887
dct_date: 2020^xs_gYear
dct_language: eng
dct_title: Functional vs. parametric equivalence of ReLU networks@eng
...