Forums  > Software  > Big Data and Deep Learning, a technology revolution in trading or yet another hype?  



Total Posts: 382
Joined: Nov 2006
Posted: 2016-10-26 19:40
jslade: "The people who actually invented it: Hinton, LeCun, Bottou: they are most emphatically not saying crazy things like this, even though there are decent motivations for them to hype it up. In fact, Yann has gone on the record that people should chill out with the crazy claims, rightly drawing the parallel with the first AI winter."

Long before the AI Winter, back in 1956, Shannon wrote the following one-page paper, "The Bandwagon":

Still applicable.

Founding Member

Total Posts: 5087
Joined: Mar 2004
Posted: 2016-10-26 19:46
Rod, good stuff.

Nonius is Satoshi Nakamoto. 物の哀れ


Total Posts: 168
Joined: Feb 2011
Posted: 2016-10-26 22:40
Indeed, an awesome essay!

My 2cents:

0) "that the basic results of the subject are aimed in a very specific direction, a direction that is not necessarily relevant to such fields as psychology, economics"...
Interestingly, that Kelly criterion is actually Kelly-Shannon.

1) "a thorough understanding of the mathematical foundation is surely a prerequisite to other applications".
Well, I read somewhere (cannot find the link anymore) that if one had bothered to prove everything about convolutional neural networks rigorously, there would be no such impressive practical applications.

2) "a few first rate research papers are preferable to a large number that are poorly conceived or half-finished".
Nowadays I would state it even more radically: a few first-class (passionate) researchers are preferable to a grey mass of lazy mediocrities.

- Knowledge rather than Hope: A Book for Retail Investors and Mathematical Finance Students
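On the Kelly-Shannon point in 0): the criterion picks the bet fraction that maximizes long-run log growth. A minimal sketch, with a hypothetical even-odds bet (the numbers are illustrative, not from the thread):

```python
import math

def kelly_fraction(p: float, b: float) -> float:
    """Optimal bet fraction for win probability p and net odds b
    (win b per unit staked, lose the stake otherwise): f* = p - (1 - p) / b."""
    return p - (1.0 - p) / b

def expected_log_growth(f: float, p: float, b: float) -> float:
    """Per-bet expected log growth rate when staking fraction f of bankroll."""
    return p * math.log(1.0 + f * b) + (1.0 - p) * math.log(1.0 - f)

# Hypothetical edge: 55% win probability at even odds (b = 1).
f_star = kelly_fraction(0.55, 1.0)   # 0.55 - 0.45 = 0.10, i.e. bet 10%
g_star = expected_log_growth(f_star, 0.55, 1.0)

# The growth rate at f* beats both under- and over-betting:
assert g_star > expected_log_growth(f_star / 2, 0.55, 1.0)
assert g_star > expected_log_growth(2 * f_star, 0.55, 1.0)
```

Over-betting is the usual failure mode: at twice the Kelly fraction the expected log growth in this example is already negative.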

Founding Member

Total Posts: 5087
Joined: Mar 2004
Posted: 2016-10-30 23:49
George Hotz. Hello from China.

Fcuk the US

Disrupt Bitchez





Total Posts: 202
Joined: Mar 2011
Posted: 2017-01-05 01:11

EspressoLover: But the killer app isn't classification, it's fantasizing.

You can have a regular statistical model "fantasize" by resampling with appropriate weights. This seems to fall under Tibshirani's joke that machine learning just has much better branding than stats.

Am I missing something?
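To make the resampling point concrete (a toy sketch of my own, not anything from the thread): a fitted classical model can "fantasize" by resampling observed points with model-implied weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D sample standing in for the training data.
x = rng.normal(loc=2.0, scale=0.5, size=1000)

# Fit a crude Gaussian model, then "fantasize": resample observed points
# with weights proportional to the fitted model's density at each point.
mu, sigma = x.mean(), x.std()
density = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
weights = density / density.sum()

fantasy = rng.choice(x, size=1000, replace=True, p=weights)

# Fantasized draws track the fitted model's location.
assert abs(fantasy.mean() - mu) < 0.1
```

The obvious limitation, and arguably EspressoLover's point: this can only re-emit points already seen, which is hopeless in high-dimensional spaces like images, where no test point coincides with a training point.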

In ten years, it's pretty feasible that some variant of recurrent nets will be able to spit out mediocre sitcom episodes or formulaic pop songs on demand.

Now this sounds like exaggeration again. So deep-learned models leapt ahead of hand-tuned ones in, e.g., language. Why does that mean we will see either gradual or quantum progress again and again for ten years? If the deep net does not think like humans, how can humans improve on its engineering? Architecture hacking? (sounds too fancy)

jslade: I have yet to see a commercial application of it that justifies the hype being created by Google and Facebook

Self-driving lorries would seem to be the closest to completion with an obvious few steps to commercialization.

Otto does, though, remind me of WeatherBill: the idea makes absolute sense on its face and in a SV deal-room you could pitch a VC (who is most startup people's real customer) on it. But WeatherBill (founded by google alumni) made some substantive changes based on where the market actually was (I think changing name to ~ClimateCorp) and the "success" was a (big $) acquisition by Monsanto, not its own self-sustaining revenue stream.

It's also not clear to me that the inventor will get the rewards. Mark Rothko doesn't make £10M when his 1961 painting sells for £10M.

will [not] be able to derive actionable meaning from.

And what's more, the best statisticians of the 20th century (the ones who got consulting contracts) all warned against data-mining, AND said how hard processing the data set would be. I think both of these warnings are actually in R's documentation.

jslade: You can already spit out formulaic pop songs using something as simple as LZW compression as the core predictor algorithm. In fact, you can barf out pretty convincing classical music this way. Video games already do this. This trick has existed since the 80s at least.

Have you got a link? I thought Nintendo simply hired 8-bit orchestral geniuses. (There is some pretty cool bit-chip music being written these days, even translating Tom Waits into bit-chip.)

katastrofa: lower rates for optimistic tweeters

Easily hacked (deep-dream a twitter account to register with your insurer).

Putting a black-box before security is one of the easy signals for me of a geeky idea that will never achieve more than enticing people into grad school.

@jslade: Magic box ideas are silly.
@chiral3: tools instead of problems
@rod: shannon

👌 👌 👌

"What you gonna dooo, without your ass?" ~ Sun Ra


Total Posts: 1182
Joined: Feb 2007
Posted: 2017-01-06 07:07
Rashomon: for LZW-inspired classical music, see for example this paper/code:

Compression algorithms are excellent at reproducing the time dependent (or multivariate if you want to think of it that way) statistical distribution of sequences of categorical data. This is kind of "duh" once you realize what a compression algorithm does, but it needs to be stated from time to time. It's little known they're also very good in generative/predictive mode, hence, using them to generate shitty Mozart. People even use them to construct t-tests on very sloppy distributions, with excellent results. For some reason it's only common knowledge to Kolmogorov students, and Martin-Löf fanboys like me.
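To make the "generative/predictive mode" point concrete — a toy sketch, not the paper's method — here is an order-k context model (the statistical core a PPM/LZ-style compressor builds to assign code lengths) run backwards as a sampler. The melody string is a made-up example.

```python
import random
from collections import defaultdict, Counter

def fit_context_model(seq, k=2):
    """Count next-symbol frequencies for each length-k context --
    the same conditional tables a compressor would code against."""
    model = defaultdict(Counter)
    for i in range(len(seq) - k):
        model[tuple(seq[i:i + k])][seq[i + k]] += 1
    return model

def generate(model, seed, n, k=2, rng=None):
    """Run the model backwards: sample from the learned conditional
    distributions instead of using them to compute code lengths."""
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(n):
        counts = model[tuple(out[-k:])]
        if not counts:          # unseen context: restart from the seed
            out.extend(seed)
            continue
        symbols, weights = zip(*counts.items())
        out.append(rng.choices(symbols, weights=weights)[0])
    return out

melody = list("CDECDECDEFGFGFECDEC")     # hypothetical note sequence
model = fit_context_model(melody, k=2)
riff = generate(model, seed=melody[:2], n=30, k=2)
assert all(note in set(melody) for note in riff)
```

Swap the melody for a corpus of MIDI event streams and you get the "shitty Mozart" generator: the output reproduces the local statistics of the training sequence without any global structure.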

FWIW, I don't know that the self-driving trucks even qualify as an application of DL. Supposedly Google very recently changed its natural-language-processing algo to a DL-inspired one. At least that's what the NYT says. They rarely get anything right about anything, so it's only a possibility. Still doesn't justify the hype, and it's certainly unnecessary for their bottom line. They probably could have just as easily built a PGM to do the same thing if they had hired Michael Jordan's grad students instead of Hinton's.

"Learning, n. The kind of ignorance distinguishing the studious."


Total Posts: 375
Joined: Jan 2015
Posted: 2017-01-06 10:43

> You can have a regular statistical model "fantasize" by resampling with appropriate weights

Well, I'm assuming we're talking about generative models here. Remember that many of the common models are purely discriminative: regression, trees, SVMs, random fields, vanilla nets, etc. There's no way to assign weights to points in X.

But even compared to other generative models, deep learning still has a sizable advantage, particularly with very high-dimensional data like images or speech. While you can still theoretically apply Gibbs sampling or really any MCMC algorithm, the process becomes computationally intractable: in high dimensions most of the probability mass lives on a thin shell, the proportion of random steps that miss it goes to 1.0, and the mixing time is O(2^d).

In contrast, RBMs have some nice properties which make (approximate) sampling dirt cheap. Those properties extend to pretty much any other type of deep net.
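The nice property is conditional independence within each layer: given the visible units, the hidden units factorize, and vice versa, so one block-Gibbs sweep costs two matrix products instead of a d-step random walk. A minimal NumPy sketch with random, untrained weights (illustrative only; a real RBM would be trained with, e.g., contrastive divergence):

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 16, 8
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # untrained weights
b_v = np.zeros(n_visible)                              # visible biases
b_h = np.zeros(n_hidden)                               # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sweep(v):
    """One block-Gibbs step: h ~ p(h|v), then v ~ p(v|h).
    Each conditional factorizes, so sampling a layer is one matmul."""
    p_h = sigmoid(v @ W + b_h)
    h = (rng.random(n_hidden) < p_h).astype(float)
    p_v = sigmoid(h @ W.T + b_v)
    return (rng.random(n_visible) < p_v).astype(float)

# "Fantasize": start from noise and run a short chain.
v = (rng.random(n_visible) < 0.5).astype(float)
for _ in range(100):
    v = gibbs_sweep(v)

assert v.shape == (n_visible,)
```

With a trained W, the chain's samples are the fantasy particles; the whole sweep is two dense matmuls regardless of dimension, which is why sampling stays cheap where generic MCMC over the joint does not.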

Good questions outrank easy answers. -Paul Samuelson


Total Posts: 168
Joined: Feb 2011
Posted: 2018-10-27 11:46
A concrete example: as I expected, AIEQ (an AI-driven ETF with IBM Watson in the backend) lost to its benchmark as soon as the market volatility regime switched.


Total Posts: 241
Joined: Mar 2018
Posted: 2018-11-24 14:01

> The actual applications of DL, AI, robo, and BD are really more in the sales / distribution space, not running the businesses or the trading / investing side.

Why do you say the sales and distribution space?

Founding Member

Total Posts: 5087
Joined: Mar 2004
Posted: 2018-11-24 14:43
That comment was over two years ago... I think what I was referring to was relegated to insurtech and the asset management associated with it. I still think it has been largely true. If you look at that part of the institutional space and weight it by assets, it is still largely inaccessible. The line that divides accessible vs. inaccessible largely correlates with high-capital vs. low-capital businesses, which is also a good proxy for regulatory complexity. (This latter point is the reason you see very few activist investors in the space, at least in the US; the regulators almost guarantee a level of inefficiency.)

Basic asset allocation is free these days, so we've seen it become an unmanned endeavor accessible to the masses. Most insurtech offerings have been simple capital-lite products (auto and home insurance, and gambling masquerading as insurance, e.g., flight insurance, which is basically paying for dyadic completion when you feel fucked over and pissed off). Where we've seen it applied to capital-heavy businesses is only on the front end (sales and distribution). In terms of sales at IBDs, there have been some meager efforts in working off the textual corpus of regulatory filings to structure trades.

I think there are some ideas out there that are absolutely great in insurtech, but I am not sure they will be realized, for a variety of valid reasons; sometimes just that until an adult comes along, nobody is going to give these kids money. An example of an idea I think is totally great for people, and that disintermediates a whole overpriced industry? A system of tontines on the BC to support income for people.
