Big Data and Deep Learning, a technology revolution in trading or yet another hype?
     

katastrofa


Total Posts: 360
Joined: Jul 2008
 
Posted: 2016-10-26 11:40
http://www.reuters.com/article/us-exchanges-surveillance-ai-idUSKCN12P0FJ

rod


Total Posts: 370
Joined: Nov 2006
 
Posted: 2016-10-26 19:40
jslade: "The people who actually invented it: Hinton, LeCun, Bottou: they are most emphatically not saying crazy things like this, even though there are decent motivations for them to hype it up. In fact, Yann has gone on the record that people should chill out with the crazy claims, rightly drawing the parallel with the first AI winter."

Long before the AI Winter, back in 1956, Shannon wrote the following 1-page paper:

[Claude E. Shannon, "The Bandwagon", IRE Transactions on Information Theory, 1956]

Still applicable.

chiral3
Founding Member

Total Posts: 5002
Joined: Mar 2004
 
Posted: 2016-10-26 19:46
Rod, good stuff.

Nonius is Satoshi Nakamoto. 物の哀れ

finanzmaster


Total Posts: 119
Joined: Feb 2011
 
Posted: 2016-10-26 22:40
Indeed, an awesome essay!

My 2cents:

0) "that the basic results of the subject are aimed in a very specific direction, a direction that is not necessarily relevant to such fields as psychology, economics"...
Interestingly, the Kelly criterion is actually Kelly-Shannon.

1) "a thorough understanding of the mathematical foundation is surely a prerequisite to other applications".
Well, I read somewhere (cannot find the link anymore) that if one had bothered to prove everything about convolutional neural networks rigorously, there would be no such impressive practical applications.

2) "a few first rate research papers are preferable to a large number that are poorly conceived or half-finished".
Nowadays I would state it even more radically: a few first-class (passionate) researchers are preferable to a grey mass of lazy mediocrities.


www.yetanotherquant.com - Knowledge rather than Hope: A Book for Retail Investors and Mathematical Finance Students

chiral3
Founding Member

Total Posts: 5002
Joined: Mar 2004
 
Posted: 2016-10-30 23:49
George Hotz. Hello from China.


Fcuk the US

Disrupt Bitchez

Nonius is Satoshi Nakamoto. 物の哀れ

rftx713


Total Posts: 87
Joined: May 2016
 
Posted: 2016-10-31 03:48
@chiral3, you have a pretty interesting point, or at least I think you do; I admit I didn't catch the obvious conclusion. With your last point, are you suggesting that this shift in consumption might end up facilitating the use of these sitcom/music generators?

Rashomon


Total Posts: 166
Joined: Mar 2011
 
Posted: 2017-01-05 01:11

EspressoLover: But the killer app isn't classification, it's fantasizing.


You can have a regular statistical model "fantasize" by resampling with appropriate weights. This seems to fall under Tibshirani's joke that machine learning just has much better branding than stats.

Am I missing something?
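
To make the resampling point concrete, a minimal sketch (the data and the mixture model are made up purely for illustration): fit an ordinary generative model and have it "fantasize" by drawing new points from the fitted density.

# Minimal sketch: a plain statistical model "fantasizing" by sampling.
# The data set and the Gaussian-mixture choice are placeholders for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Pretend these are observed 2-d feature vectors.
observed = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(500, 2)),
    rng.normal(loc=[3.0, 3.0], scale=0.8, size=(500, 2)),
])

# Fit an ordinary generative model ...
gmm = GaussianMixture(n_components=2, random_state=0).fit(observed)

# ... and "fantasize": draw fresh points from the fitted density.
fantasies, _ = gmm.sample(n_samples=10)
print(fantasies)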

In ten years, it's pretty feasible that some variant of recurrent-nets will be able to spit out mediocre sitcom episodes or formulaic pop songs on demand.


Now this sounds like exaggeration again. So deep-learned models leapt ahead of hand-tuned ones in, e.g., language. Why does that mean we will see either gradual or quantum progress again and again for 10 years? If the deep net does not think like humans, how can humans improve on its engineering? Architecture hacking? (Sounds too fancy.)

jslade: I have yet to see a commercial application of it that justifies the hype being created by Google and Facebook


Self-driving lorries would seem to be the closest to completion with an obvious few steps to commercialization.

Otto does, though, remind me of WeatherBill: the idea makes absolute sense on its face, and in an SV deal-room you could pitch a VC (who is most startup people's real customer) on it. But WeatherBill (founded by Google alumni) made some substantive changes based on where the market actually was (I think changing its name to ~ClimateCorp), and the "success" was a (big $) acquisition by Monsanto, not its own self-sustaining revenue stream.

It's also not clear to me that the inventor will get the rewards. Mark Rothko doesn't make £10M when his 1961 painting sells for £10M.

will [not] be able to derive actionable meaning from.


And what's more, the best statisticians of the 20th century (the ones who got consulting contracts) all warned against data-mining, AND said how hard processing the data set would be. I think both of these warnings are actually in R's documentation.

jslade: You can already spit out formulaic pop songs using something as simple as LZW compression as the core predictor algorithm. In fact, you can barf out pretty convincing classical music this way. Video games already do this. This trick has existed since the 80s at least.


Have you got a link? I thought Nintendo simply hired 8-bit orchestral geniuses. (There is some pretty cool bit-chip music being written these days, even translating Tom Waits into bit-chip.)

katastrofa: lower rates for optimistic tweeters


Easily hacked (deep-dream a twitter account to register with your insurer).

Putting a black-box before security is one of the easy signals for me of a geeky idea that will never achieve more than enticing people into grad school.

@jslade: Magic box ideas are silly.
@chiral3: tools instead of problems
@rod: shannon


👌 👌 👌

"What you gonna dooo, without your ass?" ~ Sun Ra

jslade


Total Posts: 1089
Joined: Feb 2007
 
Posted: 2017-01-06 07:07
Rashomon: for LZW-inspired classical music, see for example this paper/code:

http://www.cs.technion.ac.il/~ronbeg/vmm/index.html

Compression algorithms are excellent at reproducing the time-dependent (or multivariate, if you want to think of it that way) statistical distribution of sequences of categorical data. This is kind of "duh" once you realize what a compression algorithm does, but it needs to be stated from time to time. It's little known that they're also very good in generative/predictive mode, hence using them to generate shitty Mozart. People even use them to construct t-tests on very sloppy distributions, with excellent results. For some reason it's only common knowledge to Kolmogorov students and Martin-Löf fanboys like me.
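
A toy illustration of the point (this is a bare fixed-order context model, not the variable-order code from the link above, and the "corpus" is a made-up stand-in for a symbolic score):

# Toy sketch: the same counting machinery a compressor builds, run in
# generative mode to emit new sequences. Fixed order-k counts stand in
# for the fancier variable-order Markov models in the linked code.
import random
from collections import Counter, defaultdict

def train(seq, k=3):
    """Count next-symbol frequencies for every length-k context."""
    model = defaultdict(Counter)
    for i in range(len(seq) - k):
        model[seq[i:i + k]][seq[i + k]] += 1
    return model

def generate(model, seed, n=200, k=3):
    """Sample forward from the counted distributions."""
    out = list(seed)
    for _ in range(n):
        ctx = "".join(out[-k:])
        counts = model.get(ctx)
        if not counts:                      # unseen context: crude fallback
            ctx = random.choice(list(model))
            counts = model[ctx]
        symbols, weights = zip(*counts.items())
        out.append(random.choices(symbols, weights=weights)[0])
    return "".join(out)

# Train on any symbolic transcription (notes as characters, purely illustrative).
corpus = "CDEFGABcdefgab" * 50
m = train(corpus, k=3)
print(generate(m, seed="CDE", n=60, k=3))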

FWIW, I don't know that the self-driving trucks even qualify as an application of DL. Supposedly Google very recently changed its natural language processing algo to a DL-inspired one. At least that's what the NYT says. They rarely get anything right about anything, so it's only a possibility. Still doesn't justify the hype, and it's certainly unnecessary for their bottom line. They probably could have just as easily built a PGM to do the same thing if they had hired Michael Jordan grad students instead of Hinton ones.

"Learning, n. The kind of ignorance distinguishing the studious."

EspressoLover


Total Posts: 237
Joined: Jan 2015
 
Posted: 2017-01-06 10:43
@rashomon

> You can have a regular statistical model "fantasize" by resampling with appropriate weights

Well, I'm assuming we're talking about generative models here. Remember that many of the common models are purely discriminative: regression, trees, SVMs, random fields, vanilla nets, etc. There's no way to assign weights to points in X.

But even compared to other generative models, deep learning still has a sizable advantage, particularly with very high-dimensional data like images or speech. While you can still theoretically apply Gibbs sampling or really any MCMC algorithm, the process becomes computationally intractable. In high dimensions most of the probability weight lives on a thin shell. The proportion of random steps that miss goes to 1.0, and the mixing time is O(2^d).
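
A quick numerical illustration of the thin-shell point (dimensions and sample counts below are arbitrary):

# For a standard Gaussian the norm of a sample concentrates near sqrt(d),
# so almost all of the mass sits on a thin shell and an isotropic
# random-walk proposal almost always steps off it.
import numpy as np

rng = np.random.default_rng(0)
for d in (2, 100, 1000):
    x = rng.standard_normal((2000, d))
    r = np.linalg.norm(x, axis=1)
    # relative spread shrinks like 1/sqrt(2d)
    print(f"d={d:>5}  mean |x| = {r.mean():7.2f}   relative spread = {r.std() / r.mean():.3f}")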

In contrast, RBMs have some nice properties which make (approximate) sampling dirt cheap. Those properties can pretty much be extended to any other type of deep net.
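
For concreteness, a bare-bones sketch of the cheap block-Gibbs sampling that the bipartite RBM structure buys you (the weights below are random placeholders; a trained RBM would supply learned W, b, c):

# Because no visible unit connects to another visible unit (and likewise for
# hidden units), each Gibbs step is just two matrix multiplies: sample all
# hidden units given the visibles, then all visibles given the hiddens.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 784, 128
W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
b = np.zeros(n_visible)   # visible biases
c = np.zeros(n_hidden)    # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    h = (rng.random(n_hidden) < sigmoid(v @ W + c)).astype(float)
    v = (rng.random(n_visible) < sigmoid(h @ W.T + b)).astype(float)
    return v

# Start from noise and run a short chain; each step is O(n_visible * n_hidden).
v = (rng.random(n_visible) < 0.5).astype(float)
for _ in range(100):
    v = gibbs_step(v)
print(v[:20])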