Service Ventures Team

AI Learning the Designs of a Million Algorithms to Help Build New AIs Faster



The skyrocketing scale of AI has been hard to miss in recent years. The most advanced algorithms now have hundreds of billions of connections, and it takes millions of dollars and a supercomputer to train them. But as eye-catching as big AI is, progress isn't all about scale: work on the opposite end of the spectrum is just as crucial to the future of the field.


Some researchers are trying to make building AI faster, more efficient, and more accessible, and one area ripe for improvement is the learning process itself. Because AI models and the data sets they train on have grown exponentially, advanced models can take days or weeks to train, even on supercomputers. Might there be a better way? Perhaps. A new paper published on the preprint server arXiv describes how a type of algorithm called a "hypernetwork" could make the training process much more efficient. The hypernetwork in the study learned the internal connections (or parameters) of a million example algorithms, so it could pre-configure the parameters of new, untrained algorithms in a single pass instead of training them from scratch.
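The core idea can be sketched in a few lines of code. In this toy illustration (not the paper's actual method, which operates on full computation graphs of real architectures), a small "hypernetwork" takes a description of a target network's architecture and emits, in one forward pass, the entire flat parameter vector of that target network. All names, sizes, and the architecture encoding below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target network: a tiny MLP whose weights we will NOT train directly.
IN, HID, OUT = 4, 8, 2
n_params = IN * HID + HID + HID * OUT + OUT  # all weights + biases, flattened

def target_forward(x, flat_params):
    """Run the target MLP using a flat parameter vector predicted elsewhere."""
    i = 0
    W1 = flat_params[i:i + IN * HID].reshape(IN, HID); i += IN * HID
    b1 = flat_params[i:i + HID]; i += HID
    W2 = flat_params[i:i + HID * OUT].reshape(HID, OUT); i += HID * OUT
    b2 = flat_params[i:i + OUT]
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

# Hypernetwork: maps a (toy) description of the target architecture,
# here just its layer sizes, to the target's full flat parameter vector.
DESC, H_HYPER = 3, 16
Wh1 = rng.normal(0, 0.5, (DESC, H_HYPER))
Wh2 = rng.normal(0, 0.5, (H_HYPER, n_params))

def hypernetwork(arch_desc):
    """Predict every parameter of the target network in one forward pass."""
    h = np.tanh(arch_desc @ Wh1)
    return h @ Wh2 / np.sqrt(H_HYPER)  # scale down so predictions start small

desc = np.array([IN, HID, OUT]) / 10.0   # crude architecture encoding
params = hypernetwork(desc)              # one shot: no gradient steps on the target
x = rng.normal(size=(5, IN))
y = target_forward(x, params)
print(y.shape)  # (5, 2)
```

In the study, the hypernetwork itself was trained on a large set of example architectures, so its predicted parameters serve as a strong starting point (or even a usable configuration) for a new, unseen network, rather than the random values used here.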



Read more at:




/Service Ventures Team



