Nuit Blanche: comments feed
Blog author: Igor (http://www.blogger.com/profile/17474880327699002140)

SeanVN (2017-05-23 17:16)
I noticed this paper, "Exponential Capacity in an Autoencoder Neural Network with a Hidden Layer":
https://arxiv.org/pdf/1705.07441.pdf
I would guess the exponential capacity arises because real-valued weights are used to encode the binary output. With finite-precision arithmetic (say 16-bit half floats or 32-bit floats) there is probably an optimal number of weights to sum together to get a result. Beyond that, you should likely use locality-sensitive hashing to switch in different weight vectors. I also noticed recently that the new Nvidia GPU chip offers a 120 Tflop half-float matrix operation that might be suitable for random projections. If you were lucky, you might get around 40 million 65536-point RPs per second out of it. That would be about 4000 times faster than I can get from my dual-core CPU using SIMD.

Anonymous (2017-05-19 12:52)
Does it beat HYPERBAND though?

SeanVN (2017-05-16 04:17)
I'll read that paper later. On this video about the lasso, https://youtu.be/Hn8NtydkeDs, I made this comment:
"You are saying that the reconstructed data lies on an L1 manifold. You can learn a manifold using, say, a single-layer neural network autoencoder. Then, to reconstruct, you can invert the dimensionally reduced data, get the autoencoder to correct it, send it back through the dimensional reduction, and correct only the reduced aspect. Just bounce back and forth between the two. Or you could set the manifold to be the moving average of the data, which is a very easy manifold to correct to, and bounce between the two. Anyway: https://drive.google.com/open?id=0BwsgMLjV0BnhOGNxOTVITHY1U28"

SeanVN (2017-05-03 23:33)
There is also an algorithm tsunami, not just a data one! Another possibility would be to do computational self-assembly of neural nets:
http://www.exa.unicen.edu.ar/escuelapav/cursos/bio/l21.pdf

Ravi Kiran (2017-05-03 11:05)
Interesting bunch of articles, Igor!
Cheers,
Ravi

Anonymous (2017-05-02 17:30)
It would be interesting to see this with ResNets too.

SeanVN (2017-04-25 23:45)
Re: https://openreview.net/pdf?id=HkXKUTVFl
I'm trying dropout in relation to the back error projection. Anyway, there are tons of ideas to explore, especially if you start using fast random projection algorithms for both the back error projection and the forward aspects of a network.

SeanVN (2017-04-24 07:51)
Neat, will try.

SeanVN (2017-04-18 09:43)
That should have been 75 to 100 million 256-point FWHTs per second on a single GPU.
https://estudogeral.sib.uc.pt/bitstream/10316/27842/1/Optimized%20Fast%20Walsh%E2%80%93Hadamard%20Transform.pdf
Probably about 1 to 3 million 65536-point FWHTs per second on a single GPU; one 65536-point FWHT takes about 1 Mflop.
The only thing you then need for random projections (RPs) is a random sign flip of the data before the FWHT. For better quality, repeat. Including random permutations could give you more entropy per step, but is expensive time-wise.
One thing you could do is make a "soft" hash table, as a possible discriminator for GAN neural nets, or as soft memory for deep networks.
https://drive.google.com/open?id=0BwsgMLjV0BnhellXODdLZWlrOWc

Moseba (2017-04-18 07:33)
Hi, great work! Can you give me an example of what the AlexNet architecture looks like after applying your method?

SeanVN (2017-04-17 23:56)
You can get 5000 65536-point, 32-bit floating-point Fast Walsh-Hadamard Transforms per second on a single Intel Celeron core. They must be doing something terribly wrong.
https://drive.google.com/open?id=0BwsgMLjV0BnhMG5leTFkYWJMLU0
You should be able to get something like 75 million to 100 million 65536-point FWHTs per second on a top-of-the-range GPU.
The basic algorithm is:

    sub wht(x() as single)
        ' In-place, unnormalized Fast Walsh-Hadamard Transform.
        ' The array length must be a power of two.
        dim as ulongint i, k
        dim as ulongint hs = 1, m = ubound(x)   ' m = n-1
        dim as single a, b
        while hs <= m
            i = 0
            while i <= m
                k = i + hs
                while i < k
                    a = x(i)
                    b = x(i + hs)
                    x(i) = a + b
                    x(i + hs) = a - b
                    i += 1
                wend
                i = k + hs
            wend
            hs += hs
        wend
    end sub

Laurent Duval (2017-04-16 02:42)
What can you do about the remaining bit? A parity check?

SeanVN (2017-04-14 01:10)
The specialized hardware these days tends to use 8-bit or lower precision for speed.
Some of the early genetic algorithms used bit flipping as a mutation. That sounds naive, but really it is a sort of scale-free mutation, where a mutation of an 8-bit unsigned number might change it by 1, 2, 4, 8, 16, 32, 64 or 128: a sort of exponential distribution.
You can compare that to the scale-free mutation of adding or subtracting exp(-c*rnd()), where c is a positive number and rnd() returns a uniform value in [0, 1). That mutation has uniform density across magnitudes: p(1) = p(0.1) = p(0.01), etc.
So I think random bit flipping is something you can try if you want to evolve deep neural nets of low precision, especially as backpropagation is more problematic in such cases.
I'm sure I gave this reference before: https://pdfs.semanticscholar.org/c980/dc8942b4d058be301d463dc3177e8aab850e.pdf

SeanVN (2017-03-24 09:48)
I'm pretty sure I gave the link to this scale-free evolution strategies algorithm before: https://pdfs.semanticscholar.org/c980/dc8942b4d058be301d463dc3177e8aab850e.pdf
There are simple bit hacks you can use to generate that mutation probability distribution as well.
I found a good way to use it to evolve deep neural nets:
https://groups.google.com/forum/#!topic/artificial-general-intelligence/Nz_qW2FK8QY
https://discourse.numenta.org/t/overcoming-catastrophic-forgetting/2009

parvane shafiei (2017-03-06 05:31)
Thanks for sharing the link. It is just a pity that the videos are in French.

Ravi Kiran (2017-01-31 15:49)
^cool

marco (2017-01-30 10:35)
Similar ideas have been presented recently here:
https://papers.nips.cc/paper/6248-wasserstein-training-of-restricted-boltzmann-machines

Adam Stern (2017-01-05 15:44)
We at Resonon have just released an airborne hyperspectral imaging system specifically designed for UAVs. It is small and lightweight, provides high-precision data, contains all the hardware and software needed to acquire hyperspectral data, and is very affordable.
For more information, see our website at http://resonon.com/Products/airborne.html

Laurent Duval (2016-12-28 03:20)
With that, no other cross-posting is necessary on my side. Thanks!

Unknown (2016-11-26 04:46)
https://archive.org/details/bitsavers_mitreESDTe69266ANewMethodofGeneratingGaussianRando_2706065

Anonymous (2016-11-06 15:37)
Three typos in the abstract and a Microsoft Word-formatted document... This doesn't inspire much confidence!

杨洋 (2016-09-27 16:23)
Hi Igor,
Is it possible to get access to the code?
Thanks!

Bob et Carla (2016-09-25 12:53)
Thanks, Igor! I was really happy with the outcome and hope to continue this next year. :)

Emerson Machado (2016-09-02 09:41)
Hey, Igor!
I think I know the feeling of having a proposal you believe in rejected. My paper just got its second refusal, first by Signal Processing and now by Neurocomputing.
Well, as my friend Stevie likes to say, "gonna keep on tryin' till I reach the highest ground" :)
Any suggestions on where I should try to publish it? I uploaded it to arXiv: http://arxiv.org/pdf/1504.06779v2.pdf
Thanks,
Emerson

Igor (2016-08-15 12:37)
Thank you, Andrews!
Igor.
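
SeanVN's random-projection recipe (a random sign flip of the data followed by a Fast Walsh-Hadamard Transform) can be sketched in Python. The `wht` function below mirrors the FreeBASIC routine posted in the thread; the function names and the 1/sqrt(n) normalization are assumptions added here so the projection preserves vector norms, and are not part of the original routine.

```python
import math
import random

def wht(x):
    """In-place, unnormalized Fast Walsh-Hadamard Transform.
    len(x) must be a power of two."""
    n = len(x)
    hs = 1
    while hs < n:
        for i in range(0, n, hs * 2):
            for j in range(i, i + hs):
                a, b = x[j], x[j + hs]
                x[j] = a + b          # butterfly: sum
                x[j + hs] = a - b     # butterfly: difference
        hs *= 2

def random_projection(x, seed=0):
    """Random projection: flip signs with a seeded RNG, apply the WHT,
    then scale by 1/sqrt(n) so the Euclidean norm is preserved."""
    rng = random.Random(seed)
    y = [rng.choice((-1.0, 1.0)) * v for v in x]
    wht(y)
    scale = 1.0 / math.sqrt(len(y))
    return [v * scale for v in y]
```

As the thread notes, repeating the sign-flip-then-transform step improves the quality of the projection, at the cost of another O(n log n) pass.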
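
The scale-free mutation that SeanVN describes, adding or subtracting exp(-c*rnd()), can be sketched as follows; the function name and the choice of c are illustrative assumptions, not from the thread.

```python
import math
import random

def scale_free_mutate(w, c=10.0, rng=random):
    """Mutate a weight by +/- exp(-c*u), with u uniform in [0, 1).
    Step magnitudes range from exp(-c) up to 1, with equal probability
    per scale, so the mutation explores all magnitudes rather than a
    single fixed step size."""
    step = math.exp(-c * rng.random())
    return w + step if rng.random() < 0.5 else w - step
```

With c = 10, steps span roughly five orders of magnitude (about 4.5e-5 up to 1), which matches the p(1) = p(0.1) = p(0.01) behaviour claimed in the comment.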
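
The bit-flip mutation SeanVN compares it to works the same way on low-precision weights: flipping one uniformly chosen bit of an 8-bit unsigned number changes it by a power of two (1, 2, 4, ..., 128), giving the roughly exponential step distribution described above. A minimal sketch, with an illustrative function name:

```python
import random

def bit_flip_mutate(w, rng=random):
    """Flip one random bit of an 8-bit unsigned weight (0..255).
    XOR with 1 << k changes the value by exactly +/- 2**k."""
    bit = rng.randrange(8)
    return w ^ (1 << bit)
```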