"Transfusion: Understanding Transfer Learning with Applications to Medical Imaging" https://arxiv.org/abs/1902.00423
The benefits of transfer are nuanced: even with *no* feature reuse, keeping only the scaling of the pretrained weights recovers much of the effect of transfer. More findings in the paper!
Happy that our "Deep Learning for Video Game Playing" review article is now published in IEEE Transactions on Games: https://bit.ly/2NdefQ8 PDF: https://arxiv.org/pdf/1708.07902.pdf Lots of new methods added since the last update! @FilipoGiovanni@twitter.com
God bless these graduate students who come up with these! https://twitter.com/tarantulae/status/1098031333241221120
Finding my dog Alpha in the latent space of BigGAN using evolutionary search.
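The search loop behind a project like this can be sketched as a simple evolutionary loop over latent vectors. Everything below is a stand-in for illustration, not the author's actual code: the real fitness function would compare BigGAN's generated image against a photo of the dog, while this sketch just minimizes distance to a hidden low-dimensional target vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in fitness: negative distance to a hidden target vector.
# In the real setup, score(z) would measure similarity between
# BigGAN(z) and the target photo (e.g. via a perceptual metric).
DIM = 16  # tiny for illustration; BigGAN latents are much larger
target = rng.normal(size=DIM)

def score(z):
    return -np.linalg.norm(z - target)

def evolve(score_fn, dim=DIM, pop=32, elite=4, sigma=0.5, steps=100):
    """Minimal evolutionary search: keep the best latents, mutate them."""
    population = rng.normal(size=(pop, dim))
    for _ in range(steps):
        scores = np.array([score_fn(z) for z in population])
        elites = population[np.argsort(scores)[-elite:]]
        # Next generation: mutated copies of randomly chosen elites.
        parents = elites[rng.integers(elite, size=pop)]
        population = parents + sigma * rng.normal(size=(pop, dim))
        sigma *= 0.98  # anneal mutation strength over time
    scores = np.array([score_fn(z) for z in population])
    return population[scores.argmax()]

best = evolve(score)
```

With selection pressure and annealed mutations, the best latent drifts steadily toward whatever the fitness function rewards; swapping in an image-similarity score turns the same loop into a "find my dog" search.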
It's really hard to watch the GPT-2 conversations unfold, like so much else in tech. 1/
I think this is the best criticism I've heard of @OpenAI@twitter.com's release. We need to come up with norms for how we decide to publish potentially harmful research. It's unlikely that the best solution is "approach journalists first". https://twitter.com/sherjilozair/status/1097588420308844544
cool visualisations of box convolution https://github.com/shrubb/box-convolutions
Instead of learning filter values as in a traditional convolution, it learns the size and offset of a rectangular box; the weights inside the box are fixed to 1.
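Because the weights inside the box are all 1, the response is just a mean over a rectangle, which a summed-area table (integral image) gives in O(1) per pixel. The NumPy sketch below illustrates that idea; it is an assumption-laden toy, not the repository's API, and the box geometry (`dy`, `dx`, `h`, `w`) stands in for the parameters that would be learned.

```python
import numpy as np

def integral_image(x):
    # Summed-area table: ii[r, c] = sum of x[:r, :c].
    ii = np.zeros((x.shape[0] + 1, x.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(x, axis=0), axis=1)
    return ii

def box_conv2d(x, dy, dx, h, w):
    """Response at each pixel = mean of x over an h-by-w box whose
    top-left corner is offset by (dy, dx) from that pixel.
    Only the box geometry is parameterised; the weights are all 1."""
    pad = max(abs(dy), abs(dx)) + max(h, w)   # zero-pad so boxes fit
    ii = integral_image(np.pad(x, pad))
    H, W = x.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            r0, c0 = i + pad + dy, j + pad + dx
            r1, c1 = r0 + h, c0 + w
            # Box sum from four integral-image lookups, then average.
            s = ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
            out[i, j] = s / (h * w)
    return out
```

A 1x1 box with zero offset reproduces the input exactly, which makes the lookup arithmetic easy to sanity-check; the real library would additionally backpropagate through the box coordinates to learn them.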
***OpenAI Trains Language Model, Mass Hysteria Ensues*** New Post on Approximately Correct digesting the code release debate and media shitstorm.
By the way - I think a valid (if extreme) take on GPT-2 is "lol you need 10,000x the data, 1 billion parameters, and a supercomputer to get current DL models to generalize to Penn Treebank."
I tried to make something not over months/weeks/days, but more like one hour 😲
Speaks for itself. Need to get better at this kind of fast turnaround so I can go back to indulging in photography, more YouTube, and other little creative hobbies...
My first RL paper with @DeepMindAI@twitter.com is out! Learning to control the ball-in-cup from vision without explicit modeling. https://arxiv.org/abs/1902.04706
Today's meta-Twitter summary for machine learning:
None of us have any consensus on what we're doing when it comes to responsible disclosure, dual use, or how to interact with the media.
This should be concerning for us all, in and out of the field.
Are today's object-recognizing neural networks faking it until they make it?
This study shows that some models rely on texture and are bamboozled when shown just the shapes and outlines of the things they're asked to identify https://www.theregister.co.uk/2019/02/13/ai_image_texture/
We've released our work on inferring human preferences from the state of the world! By thinking about what "must have happened" in the past, we can infer what should or shouldn't be done.
Blog post: https://bair.berkeley.edu/blog/2019/02/11/learning_preferences/
We developed Reward Learning by Simulating the Past (RLSP), and created a suite of gridworlds to showcase its properties. The top row shows what happens with a misspecified reward, while the bottom shows what happens when using RLSP to correct the reward. (4/4)
Before doing anything intelligent with "AI", do the unintelligent version fast and at scale.
At worst you understand the limits of a simplistic approach and what complexities you need to handle.
At best you realize you don't need the overhead of intelligence.
CS PhD at Stanford Vision Lab. Creator/lead editor of http://skynettoday.com 🤖 Into AI, robots, cinema, coding, photography. Russian.
Scholar Social is a microblogging platform for researchers, grad students, librarians, archivists, undergrads, academically inclined high schoolers, educators of all levels, journal editors, research assistants, professors, administrators—anyone involved in academia who is willing to engage with others respectfully.
We strive to be a safe space for queer people and other minorities, recognizing that there can only be academic freedom where the existence and validity of interlocutors' identities is taken as axiomatic.
"An academic microblog that you can be proud to put on the last slide of a presentation at a conference"
(Participation is, of course, optional)
Scholar Social features a monthly "official" journal club, in which we try to read and comment on a paper of interest.
Any user of Scholar Social can suggest an article by sending the DOI by direct message to @firstname.lastname@example.org and one will be chosen by random lottery on the last day of the month. We ask that you only submit articles that are from *outside* your own field of study to try to ensure that the papers we read are accessible and interesting to non-experts.