Tuesday, December 29, 2020

The Awesome Implicit Neural Representations Highly Technical Reference Page

** Nuit Blanche is now on Twitter: @NuitBlog ** 

Here is a new curated page on the topic of Implicit Neural Representations, aptly called Awesome Implicit Neural Representations. It is curated by Vincent Sitzmann (@vincesitzmann) and has been added to the Highly Technical Reference Page.

From the page:

A curated list of resources on implicit neural representations, inspired by awesome-computer-vision. Work-in-progress.

This list does not aim to be exhaustive, as implicit neural representations are a rapidly evolving & growing research field with hundreds of papers to date.

Instead, this list aims to list papers introducing key concepts & foundations of implicit neural representations across applications. It's a great reading list if you want to get started in this area!

For most papers, there is a short summary of the most important contributions.

Disclosure: I am an author on the following papers:

What are implicit neural representations?

Implicit Neural Representations (sometimes also referred to as coordinate-based representations) are a novel way to parameterize signals of all kinds. Conventional signal representations are usually discrete - for instance, images are discrete grids of pixels, audio signals are discrete samples of amplitudes, and 3D shapes are usually parameterized as grids of voxels, point clouds, or meshes. In contrast, Implicit Neural Representations parameterize a signal as a continuous function that maps the domain of the signal (i.e., a coordinate, such as a pixel coordinate for an image) to whatever is at that coordinate (for an image, an R,G,B color). Of course, these functions are usually not analytically tractable - it is impossible to "write down" the function that parameterizes a natural image as a mathematical formula. Implicit Neural Representations thus approximate that function via a neural network.
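To make this concrete, here is a minimal sketch of such a representation in PyTorch (an illustrative choice on our part, not code from the page): a small MLP that maps a 2D pixel coordinate to an RGB color. The class name, sizes, and activation are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class ImplicitImage(nn.Module):
    """A toy implicit neural representation of a 2D image:
    maps an (x, y) coordinate in [-1, 1]^2 to an (R, G, B) color."""

    def __init__(self, hidden: int = 256, layers: int = 4):
        super().__init__()
        blocks = [nn.Linear(2, hidden), nn.ReLU()]
        for _ in range(layers - 1):
            blocks += [nn.Linear(hidden, hidden), nn.ReLU()]
        blocks += [nn.Linear(hidden, 3)]  # output: R, G, B
        self.net = nn.Sequential(*blocks)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 2) batch of (x, y) positions -> (N, 3) colors
        return self.net(coords)

# Fitting the network to one image amounts to regressing
# f(x, y) onto the ground-truth pixel color at (x, y), e.g. with an MSE loss.
```

In practice, a plain ReLU MLP like this struggles to fit high-frequency detail, which is why many of the papers on the list use sinusoidal activations (SIREN) or positional encodings instead.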

Why are they interesting?

Implicit Neural Representations have several benefits: First, they are not coupled to spatial resolution anymore, the way, for instance, an image is coupled to the number of pixels. This is because they are continuous functions! Thus, the memory required to parameterize the signal is independent of spatial resolution, and only scales with the complexity of the underlying signal. Another corollary of this is that implicit representations have "infinite resolution" - they can be sampled at arbitrary spatial resolutions.
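As a quick illustration of this "infinite resolution" property, continuing the hypothetical sketch above, the same trained network can be queried on a coordinate grid of any density:

```python
def render(model: ImplicitImage, height: int, width: int) -> torch.Tensor:
    """Sample the continuous representation on an arbitrary pixel grid.
    The same model renders 64x64 or 4096x4096 without retraining."""
    ys = torch.linspace(-1.0, 1.0, height)
    xs = torch.linspace(-1.0, 1.0, width)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    coords = torch.stack([grid_x, grid_y], dim=-1).reshape(-1, 2)
    with torch.no_grad():
        colors = model(coords)  # (H*W, 3)
    return colors.reshape(height, width, 3)

image = render(ImplicitImage(), height=512, width=512)  # any resolution
```

The memory cost is fixed by the network's weights, not by the grid it is evaluated on, which is exactly the decoupling from spatial resolution described above.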

This is immediately useful for a number of applications, such as super-resolution, or in parameterizing signals in 3D and higher dimensions, where memory requirements grow intractably fast with spatial resolution.

However, in the future, the key promise of implicit neural representations lies in algorithms that directly operate in the space of these representations. In other words: What's the "convolutional neural network" equivalent of a neural network operating on images represented by implicit representations? Questions like these offer a path towards a class of algorithms that are independent of spatial resolution!

Follow @NuitBlog or join the CompressiveSensing Reddit, the Facebook page, the Compressive Sensing group on LinkedIn, or the Advanced Matrix Factorization group on LinkedIn.


Other links:
Paris Machine Learning: Meetup.com || @Archives || LinkedIn || Facebook || @ParisMLGroup
About LightOn: Newsletter || @LightOnIO || on LinkedIn || on CrunchBase || our Blog
