RT @Reza_Zadeh@twitter.com: A wonderful collection of Art generated by Machine Learning presented at a Workshop. (video credit: gene kogan)
Gallery: nips4creativity.com/
Workshop: nips2017creativity.github.io/

🐦🔗: twitter.com/Reza_Zadeh/status/

RT @StanfordAILab@twitter.com: The 2018 AI Index Annual Report is now out! Lots of amazing data collected by @indexingai@twitter.com led by @yshoham@twitter.com. See the rise of AI and why @Stanford@twitter.com is a great environment to do AI. cdn.aiindex.org/2018/AI%20Inde

🐦🔗: twitter.com/StanfordAILab/stat

RL with model learning (à la @chelseabfinn@twitter.com's work, @hardmaru@twitter.com's World Models, and now this paper) seems like a very promising direction...

"I significantly outperforms the model-free algorithm A3C... while using 50× less environment interaction and a similar amount of computation time"
RT @brandondamos@twitter.com Learning Latent Dynamics for Planning from Pixels
arxiv.org/abs/1811.04551

🐦🔗: twitter.com/brandondamos/statu
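In case the "learn a model, then plan with it" recipe is unfamiliar, here's a tiny toy sketch of the loop in Python: fit a dynamics model from a few hundred real interactions, then choose actions by imagining rollouts inside the learned model instead of the real environment. This is only an illustration of the general idea (random-shooting MPC on a made-up 1-D task; names like toy_env_step and plan_action are mine), not the actual PlaNet algorithm from the paper.

import numpy as np

rng = np.random.default_rng(0)

def toy_env_step(state, action):
    """Toy environment: the state drifts by the action plus a little noise."""
    next_state = state + 0.1 * action + 0.01 * rng.normal()
    reward = -abs(next_state)  # reward for being near the origin
    return next_state, reward

# 1) Collect a small dataset of random interactions with the real environment.
states, actions, next_states = [], [], []
s = 1.0
for _ in range(200):
    a = rng.uniform(-1.0, 1.0)
    s_next, _ = toy_env_step(s, a)
    states.append(s); actions.append(a); next_states.append(s_next)
    s = s_next

# 2) Fit a simple dynamics model: s' ≈ w1*s + w2*a (ordinary least squares).
X = np.stack([states, actions], axis=1)
w, *_ = np.linalg.lstsq(X, np.array(next_states), rcond=None)

def plan_action(state, horizon=5, n_candidates=100):
    """Random-shooting MPC: imagine rollouts with the learned model and
    return the first action of the best-scoring candidate sequence."""
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon))
    returns = np.zeros(n_candidates)
    for i, seq in enumerate(candidates):
        sim_s = state
        for a in seq:
            sim_s = w[0] * sim_s + w[1] * a  # learned model, not the real env
            returns[i] += -abs(sim_s)
    return candidates[np.argmax(returns), 0]

# 3) Control using the learned model for all the "imagined" experience.
s = 1.0
for t in range(20):
    a = plan_action(s)
    s, r = toy_env_step(s, a)
print("final distance from origin:", abs(s))

The data efficiency in the quote comes from exactly this structure: most of the experience the planner consumes is generated by the learned model rather than by real environment interaction.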

A powerful set of graphs indeed. Though 'survival' is a bit strong; activity is measured by publishing papers, and I'd guess many people (at least in robotics) do end up still working on cutting-edge robotics or AI, but as founders, research engineers, etc., without publishing
RT @michael_nielsen@twitter.com Remarkable graph showing the survival rate of cohorts in science: pnas.org/content/early/2018/12

🐦🔗: twitter.com/michael_nielsen/st

RT @memotv@twitter.com: When I was a kid, I was a bit of a loner, tucked away in my room. ppl were like "why don't you go out & play with other kids?" & I was like "but it's more fun to compute". (I program my home computer, beam myself into the future) vimeo.com/305436376 twitter.com/memotv/status/1072

🐦🔗: twitter.com/memotv/status/1072

RT @vicariousai@twitter.com: Our new paper learns concepts as programs on a 'visual cognitive computer', showing zero-shot task-transfer on robots, and strong generalization. We bring cogsci ideas about perceptual symbol systems and image schemas into the realm of machine learning. arxiv.org/abs/1812.02788

🐦🔗: twitter.com/vicariousai/status

RT @ammnontet@twitter.com: ok OMINOUS NEURAL NETWORK THAT KNOWS WHAT CERTAIN OBJECTS ARE show me 'orb, hanging out'

🐦🔗: twitter.com/ammnontet/status/1

RT @Meaningness@twitter.com: “You are at the bottom of a deep epistemic well.”

Phil Agre on the existential dilemma of the graduate student: you have no way of knowing that the field you have chosen—because the problem matters and there’s seeming progress—is total nonsense.

🐦🔗: twitter.com/Meaningness/status

RT @WIRED@twitter.com: Imagine you had a great idea. But to share it, you had to invent paper, pen, and ink. That's exactly what Doug Engelbart did—but for him, it was the mouse, keyboard, and screen. wired.trib.al/trhkESs

🐦🔗: twitter.com/WIRED/status/10719

A fun abstract from @GoogleAI@twitter.com 😆

"Zhang et al. argued that understanding DL requires rethinking generalization ... there is no need to rethink anything, since our observations are explained by evaluating the Bayesian evidence in favor of each model."

ai.google/research/pubs/pub466
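(For anyone not steeped in the jargon: "Bayesian evidence" here is the standard marginal likelihood of the data under a model, with the parameters integrated out,

P(\mathcal{D} \mid M) = \int P(\mathcal{D} \mid \theta, M)\, P(\theta \mid M)\, d\theta .

Ranking models by this quantity carries a built-in Occam penalty, since a very flexible model has to spread its prior mass over many possible datasets; that's roughly the sense in which the abstract can say the generalization observations need no new theory. This is just the textbook definition, not a summary of the paper's actual derivation.)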

RT @thinkmariya@twitter.com: “One of the most challenging biases in the industry is the pressure to launch. Just because we *can* build a technology does not mean we should release it,” warns @mmitchell_ai@twitter.com

🐦🔗: twitter.com/thinkmariya/status

ohhhh yes, nothing like some of that GOOD sleep after hitting a deadline (in this case, NDSEG fellowship application yesterday)

Also nothing like the utter chaos of a sleep schedule prior to a deadline...

RT @mark_riedl@twitter.com: If you’re at NeurIPS on Saturday, check out the WordPlay workshop on reinforcement learning for text-adventure games.

Montezuma’s Revenge is solved.

Text-adventure games are harder.

🐦🔗: twitter.com/mark_riedl/status/

RT @RahelJhirad@twitter.com: @chrmanning@twitter.com so many things look right, but also go wrong in VQA

🐦🔗: twitter.com/RahelJhirad/status

"Deep Learning" was a good name substitution at a time NNs were frowned upon (see history overview andreykurenkov.com/writing/ai/), but nowdays it is really getting old... many have advocated for a name change, but until that's finalized I decided to take matters into my own hands :P

RT @u39kun@twitter.com: Compositional Attention Networks for Machine Reasoning
arxiv.org/abs/1803.03067

A very promising approach to injecting structural inductive biases into deep learning models so they can perform reasoning, by Drew Hudson and @chrmanning@twitter.com, @stanfordnlp@twitter.com

🐦🔗: twitter.com/u39kun/status/1071

RT @AndrewYNg@twitter.com: With NeurIPS 2018 on right now, it has been exactly 10 years since we proposed the controversial idea of using CUDA+GPUs for deep learning! h/t @goodfellow_ian@twitter.com who in 2008 helped build our first GPU server in his Stanford dorm. cs.cmu.edu/~dst/NIPS/nips08-wo

🐦🔗: twitter.com/AndrewYNg/status/1

RT @math_rachel@twitter.com: Good overview of the harms of deceptively marketing a puppet as AI, from @sina_lana@twitter.com @skynet_today@twitter.com
skynettoday.com/briefs/sophia

🐦🔗: twitter.com/math_rachel/status
