RT @Reza_Zadeh@twitter.com: A wonderful collection of Art generated by Machine Learning presented at a #NeurIPS Workshop. (video credit: gene kogan)
RT @StanfordAILab@twitter.com: The 2018 AI Index Annual Report is now out! Lots of amazing data collected by @firstname.lastname@example.org led by @email@example.com. See the rise of AI and why @Stanford@twitter.com is a great environment to do AI. #aiindex2018 http://cdn.aiindex.org/2018/AI%20Index%202018%20Annual%20Report.pdf
RL w/ model learning à la @firstname.lastname@example.org's work, @email@example.com's World Models, and now this — seems like a very promising direction...
"It significantly outperforms the model-free algorithm A3C... while using 50× less environment interaction and a similar amount of computation time"
RT @firstname.lastname@example.org Learning Latent Dynamics for Planning from Pixels
A powerful set of graphs indeed. Though, 'survival' is a bit strong; activity is measured by publishing papers, and I'd guess many people (at least in robotics) do still end up working on cutting-edge robotics or AI, but as founders, research engineers, etc., without publishing.
RT @email@example.com Remarkable graph showing the survival rate of cohorts in science: https://www.pnas.org/content/early/2018/12/05/1800478115
RT @firstname.lastname@example.org: When I was a kid, I was a bit of a loner, tucked away in my room. ppl were like "why don't you go out & play with other kids?" & I was like "but it's more fun to compute". (I program my home computer, beam myself into the future) #biggan #hypersurrealism https://vimeo.com/305436376 https://twitter.com/memotv/status/1072049396861362178/video/1
RT @email@example.com: Our new paper learns concepts as programs on a 'visual cognitive computer', showing zero-shot task-transfer on robots, and strong generalization. We bring cogsci ideas about perceptual symbol systems and image schemas into the realm of machine learning. https://arxiv.org/abs/1812.02788
RT @firstname.lastname@example.org: ok OMINOUS NEURAL NETWORK THAT KNOWS WHAT CERTAIN OBJECTS ARE show me 'orb, hanging out'
RT @Meaningness@twitter.com: “You are at the bottom of a deep epistemic well.”
Phil Agre on the existential dilemma of the graduate student: you have no way of knowing that the field you have chosen—because the problem matters and there’s seeming progress—is total nonsense.
RT @WIRED@twitter.com: Imagine you had a great idea. But to share it, you had to invent paper, pen, and ink. That's exactly what Doug Engelbart did—but for him, it was the mouse, keyboard, and screen. https://wired.trib.al/trhkESs
RT @email@example.com: after Petrov-Vodkin, one of my favorites
A fun abstract from @GoogleAI@twitter.com 😆
"Zhang et al. argued that understanding DL requires rethinking generalization ... there is no need to rethink anything, since our observations are explained by evaluating the Bayesian evidence in favor of each model."
RT @firstname.lastname@example.org: “One of the most challenging biases in the #tech industry is the pressure to launch. Just because we *can* build a technology does not mean we should release it,” warns @email@example.com #NeurIPS2018 #NeurIPS #AI #ML #bias
RT @firstname.lastname@example.org: If you’re at #NeurIPS2018 on Saturday, check out the WordPlay workshop on reinforcement learning for text-adventure games.
Montezuma’s Revenge is solved.
Text-adventure games are harder.
RT @RahelJhirad@twitter.com: #NeurIPS2018 @email@example.com so many things look right, but also go wrong, in VQA
"Deep Learning" was a good name substitution at a time when NNs were frowned upon (see history overview http://www.andreykurenkov.com/writing/ai/a-brief-history-of-neural-nets-and-deep-learning-part-4/), but nowadays it is really getting old... many have advocated for a name change, but until that's finalized I decided to take matters into my own hands :P
RT @firstname.lastname@example.org: Compositional Attention Networks for Machine Reasoning
A very promising approach to injecting structural inductive biases into deep learning models so they can perform reasoning, by Drew Hudson and @email@example.com, @firstname.lastname@example.org
RT @AndrewYNg@twitter.com: With NeurIPS 2018 on right now, it has been exactly 10 years since we proposed the controversial idea of using CUDA+GPUs for deep learning! h/t @email@example.com who in 2008 helped build our first GPU server in his Stanford dorm. http://www.cs.cmu.edu/~dst/NIPS/nips08-workshop/
RT @firstname.lastname@example.org: Good overview of the harms of deceptively marketing a puppet as AI, from @email@example.com @firstname.lastname@example.org
CS PhD at Stanford Vision Lab. Creator/lead editor http://skynettoday.com 🤖 Into AI, robots, cinema, coding, photography. Russian.
Scholar Social is a microblogging platform for researchers, grad students, librarians, archivists, undergrads, academically inclined high schoolers, educators of all levels, journal editors, research assistants, professors, administrators—anyone involved in academia who is willing to engage with others respectfully.
We strive to be a safe space for queer people and other minorities, recognizing that there can only be academic freedom where the existence and validity of interlocutors' identities is taken as axiomatic.
"A Mastodon profile you can be proud to put on the last slide of a presentation at a conference"
(Participation is, of course, optional)
Scholar Social features a monthly "official" journal club, in which we try to read and comment on a paper of interest.
Any user of Scholar Social can suggest an article by sending the DOI via direct message to @email@example.com; one will be chosen by random lottery on the last day of the month. We ask that you only submit articles from *outside* your own field of study, to help ensure that the papers we read are accessible and interesting to non-experts.