Forums  > Books & Papers  > Interesting paper on a new family of deep ANNs: Neural Ordinary Differential Equations
     

Maggette


Total Posts: 1138
Joined: Jun 2007
 
Posted: 2019-01-13 13:44
Hi,

just stumbled on this line of research:
here

Sounds interesting. Anybody have an opinion?

Edit: I somehow think this could be interesting in a time series context.
Thx
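
Edit 2: for anyone who wants to see the skeleton in code, here is a minimal forward-pass sketch using torchdiffeq (the library the paper's authors released). The small net f parameterizes the dynamics dh/dt = f(h, t), and "depth" becomes the integration interval; the dimensions and layer sizes here are made up for illustration.

import torch
import torch.nn as nn
from torchdiffeq import odeint

class ODEFunc(nn.Module):
    # f(t, h) parameterizes the continuous dynamics dh/dt = f(h, t)
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        return self.net(h)

func = ODEFunc(dim=2)
h0 = torch.randn(16, 2)        # batch of initial states (the "input layer")
t = torch.tensor([0.0, 1.0])   # integrate the hidden state from t=0 to t=1
h1 = odeint(func, h0, t)[-1]   # the final state plays the role of the output layer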

I came here and saw you and your people smiling, and said to myself: Maggette, screw the small talk, better let your fists do the talking...

finanzmaster


Total Posts: 168
Joined: Feb 2011
 
Posted: 2019-05-11 20:53
Well, if Raj addresses this topic then it might be worth a closer look.

However, the more stuff I (have to) learn, the more I recall an outstanding professor who taught us strategic management. In particular, he emphasized that most successful companies adhere not to a first-mover strategy but rather to a (fast) second-mover strategy.

Concretely, it means that I will watch Raj's video to grasp the main idea quickly; however, I will dwell on it only after this new approach shows a couple of successful applications (and moreover, after they implement it in Keras :))

www.yetanotherquant.com - Knowledge rather than Hope: A Book for Retail Investors and Mathematical Finance Students

bullero


Total Posts: 41
Joined: Feb 2018
 
Posted: 2019-05-11 21:22
So this is basically just a network with an infinite number of layers?

Edit: I mean, what is the "new" idea here besides additional mathematical machinery? It sounds like a pure mathematical trick where one changes the problem domain from optimization to the finite-difference world. Am I missing something?
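
To make concrete what I mean by changing domains: a residual block h <- h + f(h) is exactly one explicit Euler step of dh/dt = f(h) with step size 1, and shrinking the step size recovers the continuous limit. A toy numpy sketch (f is just a stand-in for a learned block):

import numpy as np

def f(h):
    # stand-in for a learned residual block
    return np.tanh(h)

h0 = np.array([0.5, -1.0])

# a ResNet block is a single explicit Euler step with step size 1
h_resnet = h0 + f(h0)

# many small Euler steps on [0, 1] approximate the ODE dh/dt = f(h)
n_steps = 1000
h_ode = h0.copy()
for _ in range(n_steps):
    h_ode += (1.0 / n_steps) * f(h_ode)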

nikol


Total Posts: 749
Joined: Jun 2005
 
Posted: 2019-05-11 22:03
A digression:

NN pruning improves things:
https://www.engadget.com/2019/05/06/mit-researchers-discover-neural-subnetworks/

It is like a multivariate fit, where parameters are added or removed once they are found to be significant or irrelevant.
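
A toy illustration of magnitude pruning in that spirit (the matrix and the keep-rate are made up): zero out the small weights, keep the rest.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))           # stand-in for a trained weight matrix
threshold = np.quantile(np.abs(W), 0.90)  # keep only the largest 10% of weights
W_pruned = W * (np.abs(W) >= threshold)   # "insignificant" parameters set to zero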

finanzmaster


Total Posts: 168
Joined: Feb 2011
 
Posted: 2019-05-11 22:29
>So this is basically just a network with an infinite number of layers?
Well, yes, but as Raj suggests in his video: one can (probably) apply the whole arsenal of ODE solvers to train such networks.
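
For instance, torchdiffeq lets you pick an adaptive Runge-Kutta solver and backpropagate with the adjoint method, i.e. by solving an augmented ODE backwards instead of storing intermediate activations. A sketch (the toy dynamics and tolerances are my own choices):

import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint

class ODEFunc(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        return self.net(h)

func = ODEFunc(dim=2)
h0 = torch.randn(8, 2)
t = torch.tensor([0.0, 1.0])
# 'dopri5' is an adaptive Runge-Kutta scheme; tolerances trade accuracy for speed
h1 = odeint(func, h0, t, method='dopri5', rtol=1e-5, atol=1e-7)[-1]
loss = h1.pow(2).mean()
loss.backward()  # gradients via the adjoint ODE, O(1) memory in "depth"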

www.yetanotherquant.com - Knowledge rather than Hope: A Book for Retail Investors and Mathematical Finance Students

bullero


Total Posts: 41
Joined: Feb 2018
 
Posted: 2019-05-11 22:42
Yeah, that's basically changing the problem domain from optimisation to ODE solving.