Friday, April 27, 2018 at 3:30pm
The machine learning community seems increasingly resigned to the prospect of solving large, non-convex optimization problems, particularly when training networks. To an outsider this seems strange, given that the models are mostly made up. Until someone proves a theorem that an easily-trained model doesn't exist, shouldn't we try to find one? In this talk I'll propose such a model. It's a feed-forward network with summation at the nodes and biased rectifiers on every edge. Only the bias parameters are learned. It seems too simple, but I can prove that at least the classes defined by the value of an arbitrary Boolean function can be represented by these "rectified-wire" networks.
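A minimal sketch of the forward pass such a network might compute, based only on the abstract's description (summation at nodes, a biased rectifier on every edge, biases as the sole parameters). The fully connected wiring and the nested-list layout of the biases are assumptions for illustration, not details from the talk:

```python
def forward(biases, x):
    """Forward pass through a hypothetical rectified-wire network.

    biases[l][i][j] is the learned bias on the edge from unit i of
    layer l to unit j of layer l+1 (fully connected layers assumed).
    Each edge applies the biased rectifier max(0, value + bias); each
    node simply sums its incoming edge outputs. The wiring is fixed;
    only the biases would be trained.
    """
    v = list(x)
    for layer in biases:
        width_out = len(layer[0])
        v = [
            sum(max(0.0, v[i] + layer[i][j]) for i in range(len(v)))
            for j in range(width_out)
        ]
    return v
```

For example, a single layer mapping two inputs to two outputs is just a 2x2 table of edge biases: `forward([[[0.0, -1.0], [0.5, 0.0]]], [1.0, 2.0])`.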
The most important property of rectified-wire networks is that they are amenable to "conservative learning." This is an online training protocol in which correct classification is imparted item by item with minimal changes to the parameters. In rectified-wire networks the bias updates are the solution to a quadratic program and are computed by making forward and backward passes through the network. The training algorithm, "sequential deactivation" (SDA), is similar to stochastic gradient descent except that the moves are large enough to be useful online.
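The conservative-learning principle (the smallest parameter change that makes the current item correct) can be illustrated on the simplest possible case: one linear constraint on the bias vector, where the quadratic program has a closed-form solution (projection onto a half-space). This is only a toy instance of the principle; the actual SDA update solves a quadratic program over the network's biases via the forward and backward passes described above, whose details are not given in this abstract:

```python
def conservative_update(w, b, target_margin):
    """Minimal-norm bias change for a single linear constraint:

        minimize ||db||^2  subject to  w . (b + db) >= target_margin

    Closed form: db = max(0, target_margin - w . b) / ||w||^2 * w.
    Illustrates the "minimal change to the parameters" idea of
    conservative learning; not the actual SDA update.
    """
    slack = target_margin - sum(wi * bi for wi, bi in zip(w, b))
    if slack <= 0:
        return list(b)  # constraint already satisfied: change nothing
    scale = slack / sum(wi * wi for wi in w)
    return [bi + scale * wi for wi, bi in zip(w, b)]
```

Note the conservative flavor: items the network already classifies correctly leave the parameters untouched, and violated items trigger the smallest correcting move rather than a fixed-size gradient step.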
The talk will review conservative learning, describe the rectified-wire model and the SDA algorithm, and present first results on natural and synthetic datasets.
B.S., 1979, California Institute of Technology; Ph.D., 1984, U.C. Berkeley. Postdoctoral Member of Technical Staff, AT&T Bell Laboratories, 1984-86; Member of Technical Staff, AT&T Bell Laboratories, 1986-88. Assistant Professor, Physics, Cornell University, 1988-93; Associate Professor, Physics, Cornell University, 1993-2001; Professor, Physics, Cornell University, 2001-present. Visiting Professor, Universität Tübingen, 1994-95. William L. McMillan Prize, 1988; Alfred P. Sloan Fellow, 1989-92; Presidential Young Investigator, 1989-94; David and Lucile Packard Fellow, 1989-94; Guggenheim Fellow, 1994-95; Alexander von Humboldt Fellow, 1994-95; Erskine Fellow, 2010; Simons Fellow in Theoretical Physics, 2015.