RT @maithra_raghu@twitter.com

"Transfusion: Understanding Transfer Learning with Applications to Medical Imaging" arxiv.org/abs/1902.00423
The benefits of transfer are nuanced. With *no* feature reuse and only the pretrained weight scaling, we can regain the effects of transfer. More findings in the paper!

🐦🔗: twitter.com/maithra_raghu/stat
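
A minimal sketch of what that weight-scaling trick amounts to, as I read it (my own NumPy illustration, not the paper's code): re-initialize each layer i.i.d. from the mean/variance of its pretrained weights, so none of the learned features survive, only their scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_var_init(layers):
    # Resample each layer i.i.d. from N(mean, std^2) of its pretrained
    # weights: the per-layer scaling transfers, the features do not.
    return [rng.normal(W.mean(), W.std(), size=W.shape) for W in layers]

# Toy stand-ins for two pretrained layers.
pretrained = [rng.normal(0, 0.05, (64, 32)), rng.normal(0, 0.02, (32, 10))]
for W, V in zip(pretrained, mean_var_init(pretrained)):
    print(W.shape, round(float(W.std()), 4), round(float(V.std()), 4))
```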

RT @risi1979@twitter.com

Happy our "Deep Learning for Video Game Playing" review article is now published in IEEE Transactions on Games: bit.ly/2NdefQ8 PDF: arxiv.org/pdf/1708.07902.pdf Lots of new methods added since the last update! @nojustesen@twitter.com @togelius@twitter.com @FilipoGiovanni@twitter.com

🐦🔗: twitter.com/risi1979/status/10

RT @eyaler@twitter.com

Finding my dog Alpha in the latent space of BigGAN using evolutionary search, inspired by @quasimondo@twitter.com @genekogan@twitter.com @kcimc@twitter.com

🐦🔗: twitter.com/eyaler/status/1098

RT @mmitchell_ai@twitter.com

It's really hard to watch the GPT-2 conversations unfold like so much else in tech. 1/

🐦🔗: twitter.com/mmitchell_ai/statu

RT @ryan_t_lowe@twitter.com

I think this is the best criticism I've heard of @OpenAI@twitter.com's release. We need to come up with norms for how we decide to publish potentially harmful research. It's unlikely that the best solution is "approach journalists first". twitter.com/sherjilozair/statu

🐦🔗: twitter.com/ryan_t_lowe/status

RT @ankurhandos@twitter.com

cool visualisations of box convolution: github.com/shrubb/box-convolut

It learns the size and offset of the rectangular box rather than the filter values (those are fixed at 1), unlike traditional convolution filters.

🐦🔗: twitter.com/ankurhandos/status
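
A rough single-channel sketch of that idea (my own NumPy illustration; the actual repo is a differentiable CUDA layer that learns fractional box coordinates per channel):

```python
import numpy as np

def box_conv(img, top, left, height, width):
    # Average the input over a (height x width) box offset by (top, left)
    # from each pixel. The box geometry is what would be learned; the
    # "filter values" are implicitly all ones, so each output is just
    # four integral-image lookups.
    H, W = img.shape
    ii = np.zeros((H + 1, W + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)  # integral image, zero-padded
    out = np.empty_like(img, dtype=float)
    for y in range(H):
        for x in range(W):
            y0, y1 = np.clip([y + top, y + top + height], 0, H)
            x0, x1 = np.clip([x + left, x + left + width], 0, W)
            out[y, x] = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return out / (height * width)

print(box_conv(np.arange(36.0).reshape(6, 6), top=-1, left=-1, height=3, width=3))
```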

RT @zacharylipton@twitter.com

***OpenAI Trains Language Model, Mass Hysteria Ensues*** New Post on Approximately Correct digesting the code release debate and media shitstorm.
approximatelycorrect.com/2019/

🐦🔗: twitter.com/zacharylipton/stat

RT @AlecRad@twitter.com

By the way - I think a valid (if extreme) take on GPT-2 is "lol you need 10,000x the data, 1 billion parameters, and a supercomputer to get current DL models to generalize to Penn Treebank."

🐦🔗: twitter.com/AlecRad/status/109

I tried to make something not over months/weeks/days, but in more like one hour 😲

youtu.be/qdnwHwtQ1k8

Speaks for itself. Need to get better at this kind of fast turnaround so I can get back to indulging in photography, more YouTube, and other little creative hobbies...

My darn todo list, I always seem to have %d things overdue...

RT @FranceNori@twitter.com

My first RL paper with @DeepMindAI@twitter.com is out! Learning to control the ball-in-cup from vision without explicit modeling. arxiv.org/abs/1902.04706.

🐦🔗: twitter.com/FranceNori/status/

Neat figure from "Loss is its own Reward: Self-Supervision for Reinforcement Learning" by Evan Shelhamer et al.

The intermingling of representation & policy learning in most DRL strikes me as kinda naive - these are different problems, yet they're treated as if they were the same.
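
To make the complaint concrete, a toy PyTorch sketch (mine, loosely in the spirit of the paper's auxiliary losses): the shared encoder gets a dense self-supervised gradient of its own rather than leaning only on the sparse policy gradient.

```python
import torch
import torch.nn as nn

# Toy illustration (not the paper's code): give the shared encoder its own
# self-supervised objective instead of entangling it with the policy loss.
enc = nn.Sequential(nn.Linear(16, 32), nn.ReLU())   # shared representation
policy = nn.Linear(32, 4)                           # action logits
decoder = nn.Linear(32, 16)                         # self-supervised head

obs = torch.randn(8, 16)
act = torch.randint(4, (8,))
adv = torch.randn(8)  # stand-in for an advantage estimate

z = enc(obs)
logp = torch.log_softmax(policy(z), dim=-1)[torch.arange(8), act]
policy_loss = -(adv * logp).mean()            # policy-gradient term
aux_loss = ((decoder(z) - obs) ** 2).mean()   # representation-learning term
(policy_loss + 0.1 * aux_loss).backward()     # encoder learns from both signals
```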

RT @Smerity@twitter.com

Today's meta-Twitter summary for machine learning:
We have no consensus on what we're doing when it comes to responsible disclosure, dual use, or how to interact with the media.
This should be concerning for us all, in and out of the field.

🐦🔗: twitter.com/Smerity/status/109

RT @TheRegister@twitter.com

Are today's object-recognizing neural networks faking it until they make it?

This study shows some models rely on texture, and are bamboozled when shown just the shapes and outlines of the things they're asked to identify: theregister.co.uk/2019/02/13/a

🐦🔗: twitter.com/TheRegister/status

RT @rohinmshah@twitter.com

We've released our work on inferring human preferences from the state of the world! By thinking about what "must have happened" in the past, we can infer what should or shouldn't be done.
Blog post: bair.berkeley.edu/blog/2019/02
Paper: openreview.net/forum?id=rkevMn

(1/4)

🐦🔗: twitter.com/rohinmshah/status/

RT @rohinmshah@twitter.com

We developed Reward Learning by Simulating the Past (RLSP), and created a suite of gridworlds to showcase its properties. The top row shows what happens with a misspecified reward, while the bottom shows what happens when using RLSP to correct the reward. (4/4)

🐦🔗: twitter.com/rohinmshah/status/
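
For intuition, a toy version of that inference (entirely my own sketch under strong simplifications: known start state, a myopic Boltzmann-rational agent, a uniform prior over just two candidate rewards). Simulate the agent forward from the past and ask which reward best explains the state we actually observe:

```python
import numpy as np

# Toy "simulate the past" (my sketch, not the paper's algorithm): given only
# the CURRENT state, score candidate rewards by how likely a noisily-rational
# agent was to have produced that state.
STATES, T, START, OBSERVED = 5, 4, 0, 4
MOVES = (-1, 0, 1)  # left, stay, right on a 1-D grid

def state_likelihood(reward, beta=3.0):
    p = np.zeros(STATES)
    p[START] = 1.0
    for _ in range(T):  # roll the myopic Boltzmann-rational agent forward
        q = np.zeros(STATES)
        for s in range(STATES):
            nxt = [min(max(s + m, 0), STATES - 1) for m in MOVES]
            w = np.exp(beta * reward[nxt])  # prefer higher-reward next states
            w /= w.sum()
            for s2, ws in zip(nxt, w):
                q[s2] += p[s] * ws
        p = q
    return p[OBSERVED]

candidates = {"prefers right": np.linspace(0, 1, STATES),
              "prefers left": np.linspace(1, 0, STATES)}
scores = {k: state_likelihood(r) for k, r in candidates.items()}
total = sum(scores.values())
print({k: round(v / total, 3) for k, v in scores.items()})  # posterior (uniform prior)
```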

RT @Smerity@twitter.com

Before doing anything intelligent with "AI", do the unintelligent version fast and at scale.
At worst you understand the limits of a simplistic approach and what complexities you need to handle.
At best you realize you don't need the overhead of intelligence.

🐦🔗: twitter.com/Smerity/status/109
