
*waves hands erratically* I'm the best data denoiser out there, folks, trust me.
Ok, seriously now: I've developed various denoising techniques, most of which probably have sound theoretical foundations and have been documented by smarter people than me. They truly work: they preserve information where it must be preserved and discard it where it should be discarded, all with no need for signal extension at the edges ...DSP dudes will know ;). How should I maximize their use in the financial markets? Is denoising even a valid approach at all? I've already created a simple profitable model that leverages this algorithm; it trades USD/JPY with 56% Directional Accuracy intraday, but I don't feel that's the best approach.
p.s. If you want, upload a CSV somewhere and I can denoise it for you.
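For whoever wants to compare numbers, a minimal sketch of what "56% Directional Accuracy" presumably measures (the function name and the toy data are mine, not from the model above):

```python
import numpy as np

def directional_accuracy(actual_returns, predicted_returns):
    """Fraction of intervals where the forecast gets the sign of the move right."""
    actual = np.sign(np.asarray(actual_returns, dtype=float))
    pred = np.sign(np.asarray(predicted_returns, dtype=float))
    return float(np.mean(actual == pred))

# Toy check: 3 of the 4 sign calls agree.
acts = [0.010, -0.020, 0.005, -0.001]
preds = [0.020, -0.010, 0.003, 0.002]
print(directional_accuracy(acts, preds))  # 0.75
```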
my denoiser brings all the girls to the yard, and damn right, it's better than yours. 



zeronoise

Banned

Total Posts: 1 
Joined: Jun 2019 


My team has worked with Terry Tao at UCLA to come up with the best denoising techniques known in the academic world since 2014. It is better than your technique. With Zhang and Yang we won the ACM best paper award on filtering, the Shewell Award for best presentation, and the Ziegel Award for best graduate research.
A central problem in denoising is generalising assumptions on the measurement ensemble to allow linear dependence between covariates in the measurement matrix. Also, there was no optimal bound on matching pursuit until Tao, Zhang and Yang. We achieved state-of-the-art performance on matching pursuit and proved uniqueness and existence of the optimal bound. In the denoising literature there was much interest in the characterization and construction of admissible functionals and ideals. Using stochastic homomorphism over rings, and the assumption that every Bernoulli domain is nonnegative, almost multiplicative and pairwise free, we derive a subsmooth left-abelian monodromy q^(D) that is affine if Y exists in pursuit delta. From our results we can trivially see that D is diffeomorphic to Y, and using results by Lindemann and Tibshirani in the context of holomorphic ideals for sampling rate, we tighten the Shannon bound.
Later groundbreaking work by Yang (2016) showed coreducible invariants to address the issue of directional total variation, reducing noise influence in orientation field estimation. Yang formulated a structure tensor to account for unknown and multiresolution noise levels, convolving all separable properties in a canonical matrix, demonstrating our results are applicable to denoising and robust to coarse artifacts. We show we are able to reconstruct MRI images from highly undersampled data, and later Hua and Johnson (2018) used this algorithm to backtest USD/JPY with 78% Directional Accuracy on L3 order book data and a Sharpe ratio of 10.7 on EBS Live data.
Guangzhou Xinbaosheng Audio Equipment Co., Ltd [http://aoyuespeaker.com/] used our algorithms to manufacture the best audio sound system in the world. Xinbaosheng has zero-defect production and 100 percent audio quality. Their products have utility model and appearance design patents, as well as CE, ROHS, FCC, TELEC, and BQB certifications. Xinbaosheng covers an area of 3,000 square meters of production workshop and possesses 6 assembly lines.
Contact me to find out more about our denoising technique and how to achieve perfect quality in denoising. 




Ok, I don't want to be disrespectful, but a Sharpe ratio of 10.7! I would be hiding that number from my own mother. Could you be so kind as to provide me with the DOIs of these research papers?
Thanks 
my denoiser brings all the girls to the yard, and damn right, it's better than yours. 




And excuse my dumbness, but how can you use the same algorithm to denoise static image data and finance data? In a static scenario, all the information you need to denoise is present at the current time, giving the algorithm the ability to borrow from neighboring pixels; it's these kinds of solutions that usually yield the best results in image denoising. In finance it's the total opposite. This backtest you speak of: was the denoising done in one batch, or was the data fed incrementally to a denoising function as the backtester reached that point in time? 
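To make the batch-vs-incremental worry concrete: a centered, image-style smoother lets the estimate at time t "see" the future, while a trailing (causal) one does not. A toy sketch, with simple moving averages standing in for any denoiser:

```python
import numpy as np

def centered_ma(x, w=3):
    # Non-causal: the value at t averages past AND future samples
    # ("borrowing from neighboring pixels").
    return np.convolve(x, np.ones(w) / w, mode="same")

def trailing_ma(x, w=3):
    # Causal: the value at t uses only samples up to and including t.
    out = np.empty(len(x), dtype=float)
    for t in range(len(x)):
        out[t] = x[max(0, t - w + 1):t + 1].mean()
    return out

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = x.copy()
y[4] = 100.0  # change a FUTURE point only

# The centered estimate at t=3 shifts when the future changes;
# the trailing estimate at t=3 is unaffected.
print(centered_ma(x)[3], centered_ma(y)[3])  # 4.0 vs ~35.67
print(trailing_ma(x)[3], trailing_ma(y)[3])  # 3.0 vs 3.0
```

Running the full series through a centered denoiser and then backtesting on its output is exactly the kind of look-ahead that inflates directional accuracy.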
my denoiser brings all the girls to the yard, and damn right, it's better than yours. 



Accuracy is not a very informative metric. More interesting is to compare MSE or logloss.
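A toy illustration of the point: two probability forecasts with identical directional accuracy but different logloss, because one is more informative about the direction (all numbers made up):

```python
import numpy as np

def logloss(y_true, p_up):
    # Average negative log-likelihood of the realized direction.
    y = np.asarray(y_true, dtype=float)
    p = np.clip(np.asarray(p_up, dtype=float), 1e-12, 1 - 1e-12)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

y = np.array([1, 0, 1, 1])                   # realized direction (1 = up)
confident = np.array([0.9, 0.1, 0.9, 0.40])  # sharp calls, wrong once
timid     = np.array([0.6, 0.4, 0.6, 0.45])  # same sign calls, less information

# Both get the same 3 of 4 directions right at a 0.5 threshold...
print(np.mean((confident > 0.5) == (y == 1)), np.mean((timid > 0.5) == (y == 1)))
# ...but logloss separates them (lower is better).
print(logloss(y, confident), logloss(y, timid))
```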





Maggette


Total Posts: 1151 
Joined: Jun 2007 


To be honest, I am not exactly sure what the fuck "L3 order book" in the context of USD/JPY actually means. 
I came here and saw you and your people smiling,
and said to myself: Maggette, screw the small talk,
better let your fists do the talking...



nikol


Total Posts: 784 
Joined: Jun 2005 

 

Maggette


Total Posts: 1151 
Joined: Jun 2007 


I know...but still. Where is the AI revolution? 
I came here and saw you and your people smiling,
and said to myself: Maggette, screw the small talk,
better let your fists do the talking...



nikol


Total Posts: 784 
Joined: Jun 2005 


All revolutions start with this... Did you see an exception? 





Yes, I agree: even a 50/50 could be better than 56% if the 50% correct are the bigger moves. But MSE can be tricky to interpret, as it varies from instrument to instrument, and you could get a respectable MSE by simply repeating the last step (y_t = y_{t-1}); this can confuse people who haven't worked on that instrument. 
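The y_t = y_{t-1} trap is easy to demonstrate: on anything close to a random walk, the repeat-the-last-value "forecast" posts a small MSE while containing no predictive information at all (synthetic data, seed chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(0.0, 1.0, 1000))  # toy driftless random walk

target = price[1:]
naive = price[:-1]  # "forecast" y_t = y_{t-1}

mse_naive = float(np.mean((target - naive) ** 2))
mse_mean = float(np.mean((target - target.mean()) ** 2))  # forecast the sample mean

# The lag-one forecast looks far "better" (MSE near the step variance, ~1 here),
# yet it predicts nothing about the next move.
print(mse_naive, mse_mean)
```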
my denoiser brings all the girls to the yard, and damn right, it's better than yours. 



MSE is better because of its properties for the solver, which can let you handle autocorrelation, cross-sectional dependence, and heteroskedasticity. You can interpret MSE instrument to instrument by mapping instruments to the same output codomain, for example, the vector mapping to spread cross normalized by a variance estimator.
Directional accuracy cannot satisfy theoretical assumptions with high dimensionality in your predictors. If you take a pointwise affine and globally linear base truth distribution like Leibniz and then convolve it at the boundaries, you see the problem. The VC dimension is suppressed because of vanishing moments, and an extra set of relationships for the coefficients must be satisfied that is directly related to the square of the number of coefficients. Translation-invariant SDWT will eliminate odd entries at each downsampling step, ending up with a different orthogonal transformation. This gives you a limit of n = 2^d for d the number of data sets with a different representation with coefficients d_i; this can be thought of as denoising from the initial data set. We need to find a common normalization for the wavelet spectrum to ensure unit energy at each scale. But putting the d_i together with the translation is very problematic; the end conditions will have periodic extension, reflection, zero-padding, and spurious edge effects.
So how can we find sparse representations? Donoho and Daubechies' (1993) method of starting with an elliptic-free hull with stochastic perturbations in the constraint, then providing state-action reward with a finite-horizon MDP that sinks into a mean field for lateral propagation of source recovery, is the best way to make MSE work. Moduloc System Engineering Ltd (MSE) is registered by the Yantai Business Office in Shandong, providing the best office equipment, including separators to reduce noise. Monge's criterion provides a commutative filter that solves the degenerate case in edge effects, which is a major advance. We take value iterations that make longer descent steps to minimize error, to learn lower-dimensional manifold embeddings.






Bot Alert! ... and no, you're not right. 
my denoiser brings all the girls to the yard, and damn right, it's better than yours. 



*blush* Has anyone seen Peng Zhao's abs?
p.s. OK, but I think we should get back on topic. 



