30 April 2018

Announcing Open Images V4 and the ECCV 2018 Open Images Challenge




In 2016, we introduced Open Images, a collaborative release of ~9 million images annotated with labels spanning thousands of object categories. Since its initial release, we've been hard at work updating and refining the dataset to provide a useful resource for the computer vision community to develop new models.

Today, we are happy to announce Open Images V4, containing 15.4M bounding boxes for 600 categories on 1.9M images, making it the largest existing dataset with object location annotations. The boxes have largely been drawn manually by professional annotators to ensure accuracy and consistency. The images are very diverse and often contain complex scenes with several objects (8 per image on average; see the visualizer).
Annotated images from the Open Images dataset. Left: Mark Paul Gosselaar plays the guitar by Rhys A. Right: Civilization by Paul Downey. Both images used under CC BY 2.0 license.
In conjunction with this release, we are also introducing the Open Images Challenge, a new object detection challenge to be held at the 2018 European Conference on Computer Vision (ECCV 2018). The Open Images Challenge follows in the tradition of PASCAL VOC, ImageNet and COCO, but at an unprecedented scale.

This challenge is unique in several ways:
  • 12.2M bounding-box annotations for 500 categories on 1.7M training images.
  • A broader range of categories than previous detection challenges, including new objects such as “fedora” and “snowman”.
  • In addition to the main object detection track, the challenge includes a Visual Relationship Detection track for detecting pairs of objects in particular relations, e.g. “woman playing guitar”.
The training set is available now. A test set of 100k images will be released on July 1st 2018 by Kaggle. The deadline for submission of results is September 1st 2018. We hope that the very large training set will stimulate research into more sophisticated detection models that will exceed current state-of-the-art performance, and that the 500 categories will enable a more precise assessment of where different detectors perform best. Furthermore, having a large set of images with many objects annotated makes it possible to explore Visual Relationship Detection, a hot emerging topic with a growing sub-community.

In addition to the above, Open Images V4 also contains 30.1M human-verified image-level labels for 19,794 categories, which are not part of the Challenge. The dataset includes 5.5M image-level labels generated by tens of thousands of users from all over the world at crowdsource.google.com.
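For readers who want to poke at the data, the annotations are distributed as CSV files. Below is a minimal sketch of exploring the box annotations with pandas; the file names and column layout (ImageID, LabelName, normalized XMin/XMax/YMin/YMax) are assumptions based on how the release is packaged, so verify them against the actual download.

```python
# Minimal sketch: explore Open Images V4 box annotations with pandas.
# Assumed files/columns: "train-annotations-bbox.csv" (ImageID, LabelName,
# XMin, XMax, YMin, YMax, ...) and the headerless class-descriptions CSV
# mapping label IDs to display names. Check against the real release.
import pandas as pd

boxes = pd.read_csv("train-annotations-bbox.csv")
labels = pd.read_csv("class-descriptions-boxable.csv",
                     header=None, names=["LabelName", "DisplayName"])

# Average number of annotated objects per image (the post cites ~8).
per_image = boxes.groupby("ImageID").size()
print(f"{len(boxes):,} boxes on {per_image.size:,} images "
      f"({per_image.mean():.1f} per image)")

# Most frequently boxed categories, with human-readable names.
top = (boxes["LabelName"].value_counts().head(10)
       .rename_axis("LabelName").reset_index(name="count")
       .merge(labels, on="LabelName"))
print(top[["DisplayName", "count"]])
```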

Facebook is trying to block Schrems II privacy referral to EU top court


Facebook’s lawyers are attempting to block a decision by the High Court in Ireland, where its international business is headquartered, to refer a long-running legal challenge to the bloc’s top court.

The social media giant’s lawyers asked the court to stay the referral to the CJEU today, Reuters reports. Facebook is trying to appeal the referral by challenging Irish case law — and wants a stay granted in the meantime.

The case relates to a complaint filed by privacy campaigner and lawyer Max Schrems regarding a transfer mechanism that’s currently used by thousands of companies to authorize flows of personal data on EU citizens to the US for processing. Though Schrems was actually challenging the use of so-called Standard Contractual Clauses (SCCs) by Facebook, specifically, when he updated an earlier complaint on the same core data transfer issue — which relates to US government mass surveillance practices, as revealed by the 2013 Snowden disclosures — with Ireland’s data watchdog.

However the Irish Data Protection Commissioner decided to refer the issue to the High Court to consider the legality of SCCs as a whole. And earlier this month the High Court decided to refer a series of questions relating to EU-US data transfers to Europe’s top court — seeking a preliminary ruling on fundamental questions that could even unseat another data transfer mechanism, the EU-US Privacy Shield, depending on what CJEU judges decide.

An earlier legal challenge by Schrems — which was also related to the clash between US mass surveillance programs (which harvest data from social media services) and EU fundamental rights (which mandate that web users’ privacy is protected) — resulted in the previous arrangement for transatlantic data flows being struck down by the CJEU in 2015, after standing for around 15 years.

Hence the current case being referred to by privacy watchers as ‘Schrems II’. You can also see why Facebook is keen to delay another CJEU referral if it can.

According to comments made by Schrems on Twitter the Irish High Court reserved judgement on Facebook’s request today, with a decision expected within a week…

Facebook’s appeal is based on trying to argue against Irish case law — which Schrems says does not allow for an appeal against such a referral, hence he’s couching it as another delaying tactic by the company.

We reached out to Facebook for comment on the case. At the time of writing it had not responded.

In a statement from October, after an earlier High Court decision on the case, Facebook said:

Standard Contract Clauses provide critical safeguards to ensure that Europeans’ data is protected once transferred to companies that operate in the US or elsewhere around the globe, and are used by thousands of companies to do business. They are essential to companies of all sizes, and upholding them is critical to ensuring the economy can continue to grow without disruption.

This ruling will have no immediate impact on the people or businesses who use our services. However it is essential that the CJEU now considers the extensive evidence demonstrating the robust protections in place under Standard Contractual Clauses and US law, before it makes any decision that may endanger the transfer of data across the Atlantic and around the globe.


Read Full Article

Google brings Nvidia’s Tesla V100 GPUs to its cloud


Google today announced that Nvidia’s high-powered Tesla V100 GPUs are now available for workloads on both Compute Engine and Kubernetes Engine. For now, this is only a public beta, but for those who need Google’s full support for their GPU workloads, the company also today announced that the slightly less powerful Nvidia P100 GPUs are now out of beta and publicly available.

The V100 GPUs remain the most powerful chips in Nvidia’s lineup for high-performance computing. They have been around for quite a while now, and Google is actually a bit late to the game here. AWS and IBM already offer V100s to their customers, and on Azure they are currently in private preview.

While Google stresses that it also uses NVLink, Nvidia’s fast interconnect for multi-GPU processing, it’s worth noting that its competitors do this, too. NVLink promises GPU-to-GPU bandwidth that’s nine times faster than traditional PCIe connections, resulting in a performance boost of up to 40 percent for some workloads, according to Google.
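As a rough sanity check on that “nine times” figure, here is a back-of-the-envelope sketch. The bandwidth numbers (roughly 300 GB/s aggregate NVLink on V100, roughly 32 GB/s bidirectional for a PCIe 3.0 x16 slot) are assumptions drawn from public spec sheets, not from the post, and real throughput depends on topology.

```python
# Back-of-the-envelope check on the NVLink vs. PCIe bandwidth claim.
# Assumed figures: V100 NVLink ~300 GB/s aggregate (6 links x 50 GB/s),
# PCIe 3.0 x16 ~32 GB/s bidirectional. Real throughput varies by topology.
NVLINK_GBPS = 6 * 50      # ~300 GB/s aggregate
PCIE3_X16_GBPS = 2 * 16   # ~32 GB/s (16 GB/s each direction)

print(f"NVLink advantage: ~{NVLINK_GBPS / PCIE3_X16_GBPS:.1f}x")  # ~9.4x
```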

All of that power comes at a price, of course. An hour of V100 usage costs $2.48, while the P100 will set you back $1.46 per hour (these are the standard prices, with the preemptible machines coming in at half that price). In addition, you’ll also need to pay Google to run your regular virtual machine or containers.

V100 machines are now available in two configurations with either one or eight GPUs, with support for two or four attached GPUs coming in the future. The P100 machines come with one, two or four attached GPUs.
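To put the pricing in perspective, here is a small sketch that estimates GPU-hour costs from the figures quoted above. It covers the GPU charge only, since the VM or containers running alongside are billed separately, and the rates will have changed since publication.

```python
# Estimate GPU costs from the per-hour prices quoted in the post.
# GPU charge only -- the attached VM/containers are billed separately.
STANDARD_RATES = {"V100": 2.48, "P100": 1.46}  # USD per GPU-hour

def gpu_cost(gpu: str, count: int, hours: float, preemptible: bool = False) -> float:
    rate = STANDARD_RATES[gpu]
    if preemptible:  # preemptible machines come in at half price
        rate /= 2
    return rate * count * hours

# A day of training on an 8xV100 machine, standard vs. preemptible:
print(f"standard:    ${gpu_cost('V100', 8, 24):.2f}")        # $476.16
print(f"preemptible: ${gpu_cost('V100', 8, 24, True):.2f}")  # $238.08
```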


Read Full Article

Twitter also sold data access to Cambridge Analytica researcher


Since it was revealed that Cambridge Analytica improperly accessed the personal data of millions of Facebook users, one question has lingered in the minds of the public: What other data did Dr. Aleksandr Kogan gain access to?

Twitter confirmed to The Telegraph on Saturday that GSR, Kogan’s own commercial enterprise, had purchased one-time API access to a random sample of public tweets from a five-month period between December 2014 and April 2015. Twitter told Bloomberg that, following an internal review, the company did not find any access to private data about people who use Twitter.

Twitter sells API access to large organizations or enterprises for the purposes of surveying sentiment or opinion during various events, or around certain topics or ideas.
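To illustrate the kind of analysis such access enables, here is a toy sketch of lexicon-based sentiment tallying over a sample of tweets. The sample tweets and word lists are entirely made up for illustration; production systems use trained sentiment models rather than a hand-rolled lexicon.

```python
# Toy illustration of surveying sentiment over a sample of public tweets.
# The tweets and word lists below are hypothetical.
POSITIVE = {"great", "love", "win"}
NEGATIVE = {"bad", "hate", "fail"}

def sentiment(text: str) -> int:
    """Return +1, -1, or 0 by counting lexicon hits in the tweet text."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

sample = [
    "Love this new policy, great move",
    "What a bad decision, total fail",
    "Polls open tomorrow",
]
tally = {s: [sentiment(t) for t in sample].count(s) for s in (1, 0, -1)}
print(tally)  # {1: 1, 0: 1, -1: 1}
```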

Here’s what a Twitter spokesperson said to The Telegraph:

Twitter has also made the policy decision to off-board advertising from all accounts owned and operated by Cambridge Analytica. This decision is based on our determination that Cambridge Analytica operates using a business model that inherently conflicts with acceptable Twitter Ads business practices. Cambridge Analytica may remain an organic user on our platform, in accordance with the Twitter Rules.

Obviously, this doesn’t have the same scope as the data harvested about users on Facebook. Twitter’s data on users is far less personal. Location on the platform is opt-in and generic at that, and users are not forced to use their real name on the platform.

Still, it shows just how broad the Cambridge Analytica data collection was ahead of the 2016 election.

We reached out to Twitter and will update when we hear back.


Read Full Article

Europe eyeing bot IDs, ad transparency and blockchain to fight fakes


European Union lawmakers want online platforms to come up with their own systems to identify bot accounts.

This forms part of a voluntary Code of Practice the European Commission now wants platforms to develop and apply — by this summer — within a wider package of proposals generally aimed at tackling the problematic spread and impact of disinformation online.

The proposals follow an EC-commissioned report last month, by its High-Level Expert Group, which recommended more transparency from online platforms to help combat the spread of false information online — and also called for urgent investment in media and information literacy education, and strategies to empower journalists and foster a diverse and sustainable news media ecosystem.

Bots, fake accounts, political ads, filter bubbles

In an announcement on Friday the Commission said it wants platforms to establish “clear marking systems and rules for bots” in order to ensure “their activities cannot be confused with human interactions”. It does not go into greater detail on how that might be achieved; clearly it intends for platforms to come up with the relevant methodologies themselves.

Identifying bots is not an exact science — as academics conducting research into how information spreads online could tell you. The current tools that exist for trying to spot bots typically involve rating accounts across a range of criteria to give a score of how likely an account is to be algorithmically controlled vs human controlled. But platforms do at least have a perfect view into their own systems, whereas academics have had to rely on the variable level of access platforms are willing to give them.
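A crude illustration of that scoring approach: combine a handful of account-level signals into a weighted score and threshold it. The features, weights, and threshold below are all hypothetical; real bot classifiers use many more criteria and learn their weights from labeled accounts.

```python
# Hypothetical sketch of criteria-based bot scoring: each signal nudges the
# score toward "likely automated". Weights and threshold are invented here;
# real classifiers learn them from labeled accounts.
def bot_score(account: dict) -> float:
    score = 0.0
    if account["tweets_per_day"] > 100:        # superhuman posting rate
        score += 0.4
    if account["account_age_days"] < 30:       # very new account
        score += 0.2
    if account["followers"] < account["following"] / 10:  # follow-spam pattern
        score += 0.2
    if account["default_profile_image"]:       # no profile customization
        score += 0.2
    return score  # 0.0 (human-like) .. 1.0 (bot-like)

suspect = {"tweets_per_day": 240, "account_age_days": 12,
           "followers": 15, "following": 2000, "default_profile_image": True}
s = bot_score(suspect)
print(f"{s:.2f} -> {'likely bot' if s >= 0.5 else 'likely human'}")  # 1.00 -> likely bot
```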

Another factor here is the sophisticated nature of some online disinformation campaigns — such as the state-sponsored and heavily resourced efforts by Kremlin-backed entities like Russia’s Internet Research Agency. If the focus ends up being on algorithmically controlled bots, rather than on identifying bots that have human agents helping or controlling them, plenty of more insidious disinformation agents could easily slip through the cracks.

That said, other measures in the EC’s proposals for platforms include stepping up their existing efforts to shutter fake accounts and being able to demonstrate the “effectiveness” of such efforts — so greater transparency around how fake accounts are identified and the proportion being removed (which could help surface more sophisticated human-controlled bot activity on platforms too).

Another measure from the package: The EC says it wants to see “significantly” improved scrutiny of ad placements — with a focus on trying to reduce revenue opportunities for disinformation purveyors.

Restricting targeting options for political advertising is another component. “Ensure transparency about sponsored content relating to electoral and policy-making processes,” is one of the listed objectives on its fact sheet — and ad transparency is something Facebook has said it’s prioritizing since revelations about the extent of Kremlin disinformation on its platform during the 2016 US presidential election, with expanded tools due this summer.

The Commission also says generally that it wants platforms to provide “greater clarity about the functioning of algorithms” and enable third-party verification — though there’s no greater level of detail being provided at this point to indicate how much algorithmic accountability it’s after from platforms.

We’ve asked for more on its thinking here and will update this story with any response. It looks to be testing the water to see how much of the workings of platforms’ algorithmic blackboxes can be coaxed from them voluntarily — such as via measures targeting bots and fake accounts — in an attempt to stave off formal and more extensive regulation down the line.

Filter bubbles also appear to be informing the Commission’s thinking, as it says it wants platforms to make it easier for users to “discover and access different news sources representing alternative viewpoints” — via tools that let users customize and interact with the online experience to “facilitate content discovery and access to different news sources”.

Though another stated objective is for platforms to “improve access to trustworthy information” — so there are questions about how those two aims can be balanced, i.e. without efforts towards one undermining the other. 

On trustworthiness, the EC says it wants platforms to help users assess whether content is reliable using “indicators of the trustworthiness of content sources”, as well as by providing “easily accessible tools to report disinformation”.

In one of several steps Facebook has taken since 2016 to try to tackle the problem of fake content being spread on its platform, the company experimented with putting ‘disputed’ labels or red flags on potentially untrustworthy information. However the company discontinued this in December after research suggested negative labels could entrench deeply held beliefs, rather than helping to debunk fake stories.

Instead it started showing related stories — containing content it had verified as coming from news outlets its network of fact checkers considered reputable — as an alternative way to debunk potential fakes.

The Commission’s approach looks to be aligning with Facebook’s rethought approach — with the subjective question of how to make judgements on what is (and therefore what isn’t) a trustworthy source likely being handed off to third parties, given that another strand of the code is focused on “enabling fact-checkers, researchers and public authorities to continuously monitor online disinformation”.

Since 2016 Facebook has been leaning heavily on a network of local third party ‘partner’ fact-checkers to help identify and mitigate the spread of fakes in different markets — including checkers for written content and also photos and videos, the latter in an effort to combat fake memes before they have a chance to go viral and skew perceptions.

In parallel Google has also been working with external fact checkers, such as on initiatives such as highlighting fact-checked articles in Google News and search. 

The Commission clearly approves of the companies reaching out to a wider network of third party experts. But it is also encouraging work on innovative tech-powered fixes to the complex problem of disinformation — describing AI (“subject to appropriate human oversight”) as set to play a “crucial” role for “verifying, identifying and tagging disinformation”, and pointing to blockchain as having promise for content validation.

Specifically it reckons blockchain technology could play a role by, for instance, being combined with the use of “trustworthy electronic identification, authentication and verified pseudonyms” to preserve the integrity of content and validate “information and/or its sources, enable transparency and traceability, and promote trust in news displayed on the Internet”.
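Stripped of the blockchain machinery, the underlying idea is a verifiable binding between a piece of content and an identified source. Here is a minimal sketch using only Python’s standard library: the publisher signs a hash of the article, and anyone holding the key can detect tampering. The key and article are made up, and a real deployment would use public-key signatures anchored in a ledger rather than an HMAC with a shared secret.

```python
# Minimal sketch of content integrity checking: bind an article to a source
# by signing its hash. Uses an HMAC (shared secret) for brevity; a real
# system would use public-key signatures anchored in a ledger.
import hashlib
import hmac

PUBLISHER_KEY = b"hypothetical-publisher-secret"  # stand-in for a real key

def sign(article: bytes) -> str:
    return hmac.new(PUBLISHER_KEY, article, hashlib.sha256).hexdigest()

def verify(article: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(article), signature)

article = b"EU unveils Code of Practice on disinformation."
tag = sign(article)
print(verify(article, tag))                 # True: content intact
print(verify(article + b" [edited]", tag))  # False: content was altered
```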

It’s one of a handful of nascent technologies the executive flags as potentially useful for fighting fake news, and whose development it says it intends to support via an existing EU research funding vehicle: The Horizon 2020 Work Program.

It says it will use this program to support research activities on “tools and technologies such as artificial intelligence and blockchain that can contribute to a better online space, increasing cybersecurity and trust in online services”.

It also flags “cognitive algorithms that handle contextually-relevant information, including the accuracy and the quality of data sources” as a promising tech to “improve the relevance and reliability of search results”.

The Commission is giving platforms until July to develop and apply the Code of Practice — and is holding out the possibility that it could still draw up new laws, should it feel the voluntary measures fail, as a mechanism to encourage companies to put the work in.

It is also proposing a range of other measures to tackle the online disinformation issue — including:

  • An independent European network of fact-checkers: The Commission says this will establish “common working methods, exchange best practices, and work to achieve the broadest possible coverage of factual corrections across the EU”; and says they will be selected from the EU members of the International Fact Checking Network, which it notes follows “a strict International Fact Checking Network Code of Principles”
  • A secure European online platform on disinformation to support the network of fact-checkers and relevant academic researchers with “cross-border data collection and analysis”, as well as benefitting from access to EU-wide data
  • Enhancing media literacy: On this it says a higher level of media literacy will “help Europeans to identify online disinformation and approach online content with a critical eye”. So it says it will encourage fact-checkers and civil society organisations to provide educational material to schools and educators, and organise a European Week of Media Literacy
  • Support for Member States in ensuring the resilience of elections against what it dubs “increasingly complex cyber threats” including online disinformation and cyber attacks. Stated measures here include encouraging national authorities to identify best practices for the identification, mitigation and management of risks in time for the 2019 European Parliament elections. It also notes work by a Cooperation Group, saying “Member States have started to map existing European initiatives on cybersecurity of network and information systems used for electoral processes, with the aim of developing voluntary guidance” by the end of the year. It also says it will organise a high-level conference with Member States on cyber-enabled threats to elections in late 2018
  • Promotion of voluntary online identification systems with the stated aim of improving the “traceability and identification of suppliers of information” and promoting “more trust and reliability in online interactions and in information and its sources”. This includes support for related research activities in technologies such as blockchain, as noted above. The Commission also says it will “explore the feasibility of setting up voluntary systems to allow greater accountability based on electronic identification and authentication scheme” — as a measure to tackle fake accounts. “Together with other actions aimed at improving traceability online (improving the functioning, availability and accuracy of information on IP and domain names in the WHOIS system and promoting the uptake of the IPv6 protocol), this would also contribute to limiting cyberattacks,” it adds
  • Support for quality and diversified information: The Commission is calling on Member States to scale up their support of quality journalism to ensure a pluralistic, diverse and sustainable media environment. The Commission says it will launch a call for proposals in 2018 for “the production and dissemination of quality news content on EU affairs through data-driven news media”

It says it will aim to co-ordinate its strategic comms policy to try to counter “false narratives about Europe” — which makes you wonder whether debunking the output of certain UK tabloid newspapers might fall under that new EC strategy — and also more broadly to tackle disinformation “within and outside the EU”.

Commenting on the proposals in a statement, the Commission’s VP for the Digital Single Market, Andrus Ansip, said: “Disinformation is not new as an instrument of political influence. New technologies, especially digital, have expanded its reach via the online environment to undermine our democracy and society. Since online trust is easy to break but difficult to rebuild, industry needs to work together with us on this issue. Online platforms have an important role to play in fighting disinformation campaigns organised by individuals and countries who aim to threaten our democracy.”

The EC’s next steps will be bringing the relevant parties together — including platforms, the ad industry and “major advertisers” — in a forum to work on fostering cooperation and getting them to apply themselves to what are still, at this stage, voluntary measures.

“The forum’s first output should be an EU–wide Code of Practice on Disinformation to be published by July 2018, with a view to having a measurable impact by October 2018,” says the Commission. 

The first progress report will be published in December 2018. “The report will also examine the need for further action to ensure the continuous monitoring and evaluation of the outlined actions,” it warns.

And if self-regulation fails…

In a fact sheet further fleshing out its plans, the Commission states: “Should the self-regulatory approach fail, the Commission may propose further actions, including regulatory ones targeted at a few platforms.”

And for “a few” read: Mainstream social platforms — so likely the big tech players in the social digital arena: Facebook, Google, Twitter.

For potential regulatory actions, tech giants need only look to Germany, where a 2017 social media hate speech law has introduced fines of up to €50M for platforms that fail to comply with valid takedown requests within 24 hours for simple cases — an example of the kind of scary EU-wide law that could come rushing down the pipe at them if the Commission and EU states decide it’s necessary to legislate.

Justice and consumer affairs commissioner Vera Jourova signaled in January that her preference, on hate speech at least, was to continue pursuing the voluntary approach — though she also said some Member States’ ministers are open to a new EU-level law should the voluntary approach fail.

In Germany the so-called NetzDG law has faced criticism for pushing platforms towards risk aversion-based censorship of online content. And the Commission is clearly keen to avoid such charges being leveled at its proposals, stressing that if regulation were to be deemed necessary “such [regulatory] actions should in any case strictly respect freedom of expression”.

Commenting on the Code of Practice proposals, a Facebook spokesperson told us: “People want accurate information on Facebook – and that’s what we want too. We have invested heavily in fighting false news on Facebook by disrupting the economic incentives for the spread of false news, building new products and working with third-party fact checkers.”

A Twitter spokesman declined to comment on the Commission’s proposals but flagged contributions he said the company is already making to support media literacy — including an event last week at its EMEA HQ.

At the time of writing Google had not responded to a request for comment.

Last month the Commission did further tighten the screw on platforms over terrorist content specifically — saying it wants them to get this taken down within an hour of a report as a general rule. Though it still hasn’t taken the step to cement that hour ‘rule’ into legislation, also preferring to see how much action it can voluntarily squeeze out of platforms via a self-regulation route.



Read Full Article

Google at ICLR 2018




This week, Vancouver, Canada hosts the 6th International Conference on Learning Representations (ICLR 2018), a conference focused on how one can learn meaningful and useful representations of data for machine learning. ICLR includes conference and workshop tracks, with invited talks along with oral and poster presentations of some of the latest research on deep learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and issues regarding non-convex optimization.

Google is at the forefront of research in neural networks and deep learning, focusing on both theory and application and developing learning approaches that understand and generalize. As Platinum Sponsor of ICLR 2018, Google will have a strong presence with over 130 researchers attending, contributing to and learning from the broader academic research community by presenting papers and posters, in addition to participating on organizing committees and in workshops.

If you are attending ICLR 2018, we hope you'll stop by our booth and chat with our researchers about the projects and opportunities at Google that go into solving interesting problems for billions of people. You can also learn more about our research being presented at ICLR 2018 in the list below (Googlers highlighted in blue).

Senior Program Chairs include:
Tara Sainath

Steering Committee includes:
Hugo Larochelle

Oral Contributions
Wasserstein Auto-Encoders
Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, Bernhard Scholkopf

On the Convergence of Adam and Beyond (Best Paper Award)
Sashank J. Reddi, Satyen Kale, Sanjiv Kumar

Ask the Right Questions: Active Question Reformulation with Reinforcement Learning
Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Wojciech Gajewski, Andrea Gesmundo, Neil Houlsby, Wei Wang

Beyond Word Importance: Contextual Decompositions to Extract Interactions from LSTMs
W. James Murdoch, Peter J. Liu, Bin Yu

Conference Posters
Boosting the Actor with Dual Critic
Bo Dai, Albert Shaw, Niao He, Lihong Li, Le Song

MaskGAN: Better Text Generation via Filling in the _______
William Fedus, Ian Goodfellow, Andrew M. Dai

Scalable Private Learning with PATE
Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Ulfar Erlingsson

Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
Yujun Lin, Song Han, Huizi Mao, Yu Wang, William J. Dally

Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches
Yeming Wen, Paul Vicol, Jimmy Ba, Dustin Tran, Roger Grosse

Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models
Adam Roberts, Jesse Engel, Matt Hoffman

Multi-Mention Learning for Reading Comprehension with Neural Cascades
Swabha Swayamdipta, Ankur P. Parikh, Tom Kwiatkowski

QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
Adams Wei Yu, David Dohan, Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, Quoc V. Le

Sensitivity and Generalization in Neural Networks: An Empirical Study
Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein

Action-dependent Control Variates for Policy Optimization via Stein Identity
Hao Liu, Yihao Feng, Yi Mao, Dengyong Zhou, Jian Peng, Qiang Liu

An Efficient Framework for Learning Sentence Representations
Lajanugen Logeswaran, Honglak Lee

Fidelity-Weighted Learning
Mostafa Dehghani, Arash Mehrjou, Stephan Gouws, Jaap Kamps, Bernhard Schölkopf

Generating Wikipedia by Summarizing Long Sequences
Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, Noam Shazeer

Matrix Capsules with EM Routing
Geoffrey Hinton, Sara Sabour, Nicholas Frosst

Temporal Difference Models: Model-Free Deep RL for Model-Based Control
Sergey Levine, Shixiang Gu, Murtaza Dalal, Vitchyr Pong

Deep Neural Networks as Gaussian Processes
Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel L. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein

Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence at Every Step
William Fedus, Mihaela Rosca, Balaji Lakshminarayanan, Andrew M. Dai, Shakir Mohamed, Ian Goodfellow

Initialization Matters: Orthogonal Predictive State Recurrent Neural Networks
Krzysztof Choromanski, Carlton Downey, Byron Boots

Learning Differentially Private Recurrent Language Models
H. Brendan McMahan, Daniel Ramage, Kunal Talwar, Li Zhang

Learning Latent Permutations with Gumbel-Sinkhorn Networks
Gonzalo Mena, David Belanger, Scott Linderman, Jasper Snoek

Leave no Trace: Learning to Reset for Safe and Autonomous Reinforcement Learning
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine

Meta-Learning for Semi-Supervised Few-Shot Classification
Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Josh Tenenbaum, Hugo Larochelle, Richard Zemel

Thermometer Encoding: One Hot Way to Resist Adversarial Examples
Jacob Buckman, Aurko Roy, Colin Raffel, Ian Goodfellow

A Hierarchical Model for Device Placement
Azalia Mirhoseini, Anna Goldie, Hieu Pham, Benoit Steiner, Quoc V. Le, Jeff Dean

Monotonic Chunkwise Attention
Chung-Cheng Chiu, Colin Raffel

Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
Kimin Lee, Honglak Lee, Kibok Lee, Jinwoo Shin

Trust-PCL: An Off-Policy Trust Region Method for Continuous Control
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans

Ensemble Adversarial Training: Attacks and Defenses
Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel

Stochastic Variational Video Prediction
Mohammad Babaeizadeh, Chelsea Finn, Dumitru Erhan, Roy Campbell, Sergey Levine

Depthwise Separable Convolutions for Neural Machine Translation
Lukasz Kaiser, Aidan N. Gomez, Francois Chollet

Don’t Decay the Learning Rate, Increase the Batch Size
Samuel L. Smith, Pieter-Jan Kindermans, Chris Ying, Quoc V. Le

Generative Models of Visually Grounded Imagination
Ramakrishna Vedantam, Ian Fischer, Jonathan Huang, Kevin Murphy

Large Scale Distributed Neural Network Training through Online Distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Robert Ormandi, George E. Dahl, Geoffrey E. Hinton

Learning a Neural Response Metric for Retinal Prosthesis
Nishal P. Shah, Sasidhar Madugula, Alan Litke, Alexander Sher, EJ Chichilnisky, Yoram Singer, Jonathon Shlens

Neumann Optimizer: A Practical Optimization Algorithm for Deep Neural Networks
Shankar Krishnan, Ying Xiao, Rif A. Saurous

A Neural Representation of Sketch Drawings
David Ha, Douglas Eck

Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling
Carlos Riquelme, George Tucker, Jasper Snoek

Generalizing Hamiltonian Monte Carlo with Neural Networks
Daniel Levy, Matthew D. Hoffman, Jascha Sohl-Dickstein

Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis
Rudy Bunel, Matthew Hausknecht, Jacob Devlin, Rishabh Singh, Pushmeet Kohli

On the Discrimination-Generalization Tradeoff in GANs
Pengchuan Zhang, Qiang Liu, Dengyong Zhou, Tao Xu, Xiaodong He

A Bayesian Perspective on Generalization and Stochastic Gradient Descent
Samuel L. Smith, Quoc V. Le

Learning how to Explain Neural Networks: PatternNet and PatternAttribution
Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, Klaus-Robert Müller, Dumitru Erhan, Been Kim, Sven Dähne

Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks
Víctor Campos, Brendan Jou, Xavier Giró-i-Nieto, Jordi Torres, Shih-Fu Chang

Towards Neural Phrase-based Machine Translation
Po-Sen Huang, Chong Wang, Sitao Huang, Dengyong Zhou, Li Deng

Unsupervised Cipher Cracking Using Discrete GANs
Aidan N. Gomez, Sicong Huang, Ivan Zhang, Bryan M. Li, Muhammad Osama, Lukasz Kaiser

Variational Image Compression With A Scale Hyperprior
Johannes Ballé, David Minnen, Saurabh Singh, Sung Jin Hwang, Nick Johnston

Workshop Posters
Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values
Julius Adebayo, Justin Gilmer, Ian Goodfellow, Been Kim

Stochastic Gradient Langevin Dynamics that Exploit Neural Network Structure
Zachary Nado, Jasper Snoek, Bowen Xu, Roger Grosse, David Duvenaud, James Martens

Towards Mixed-initiative generation of multi-channel sequential structure
Anna Huang, Sherol Chen, Mark J. Nelson, Douglas Eck

Can Deep Reinforcement Learning Solve Erdos-Selfridge-Spencer Games?
Maithra Raghu, Alex Irpan, Jacob Andreas, Robert Kleinberg, Quoc V. Le, Jon Kleinberg

GILBO: One Metric to Measure Them All
Alexander Alemi, Ian Fischer

HoME: a Household Multimodal Environment
Simon Brodeur, Ethan Perez, Ankesh Anand, Florian Golemo, Luca Celotti, Florian Strub, Jean Rouat, Hugo Larochelle, Aaron Courville

Learning to Learn without Labels
Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein

Learning via Social Awareness: Improving Sketch Representations with Facial Feedback
Natasha Jaques, Jesse Engel, David Ha, Fred Bertsch, Rosalind Picard, Douglas Eck

Negative Eigenvalues of the Hessian in Deep Neural Networks
Guillaume Alain, Nicolas Le Roux, Pierre-Antoine Manzagol

Realistic Evaluation of Semi-Supervised Learning Algorithms
Avital Oliver, Augustus Odena, Colin Raffel, Ekin Cubuk, Ian Goodfellow

Winner's Curse? On Pace, Progress, and Empirical Rigor
D. Sculley, Jasper Snoek, Alex Wiltschko, Ali Rahimi

Meta-Learning for Batch Mode Active Learning
Sachin Ravi, Hugo Larochelle

To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression
Michael Zhu, Suyog Gupta

Adversarial Spheres
Justin Gilmer, Luke Metz, Fartash Faghri, Sam Schoenholz, Maithra Raghu, Martin Wattenberg, Ian Goodfellow

Clustering Meets Implicit Generative Models
Francesco Locatello, Damien Vincent, Ilya Tolstikhin, Gunnar Ratsch, Sylvain Gelly, Bernhard Scholkopf

Decoding Decoders: Finding Optimal Representation Spaces for Unsupervised Similarity Tasks
Vitalii Zhelezniak, Dan Busbridge, April Shen, Samuel L. Smith, Nils Y. Hammerla

Learning Longer-term Dependencies in RNNs with Auxiliary Losses
Trieu Trinh, Quoc Le, Andrew Dai, Thang Luong

Graph Partition Neural Networks for Semi-Supervised Classification
Alexander Gaunt, Danny Tarlow, Marc Brockschmidt, Raquel Urtasun, Renjie Liao, Richard Zemel

Searching for Activation Functions
Prajit Ramachandran, Barret Zoph, Quoc Le

Time-Dependent Representation for Neural Event Sequence Prediction
Yang Li, Nan Du, Samy Bengio

Faster Discovery of Neural Architectures by Searching for Paths in a Large Model
Hieu Pham, Melody Guan, Barret Zoph, Quoc V. Le, Jeff Dean

Intriguing Properties of Adversarial Examples
Ekin Dogus Cubuk, Barret Zoph, Sam Schoenholz, Quoc Le

PPP-Net: Platform-aware Progressive Search for Pareto-optimal Neural Architectures
Jin-Dong Dong, An-Chieh Cheng, Da-Cheng Juan, Wei Wei, Min Sun

The Mirage of Action-Dependent Baselines in Reinforcement Learning
George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard E. Turner, Zoubin Ghahramani, Sergey Levine

Learning to Organize Knowledge with N-Gram Machines
Fan Yang, Jiazhong Nie, William W. Cohen, Ni Lao

Online variance-reducing optimization
Nicolas Le Roux, Reza Babanezhad, Pierre-Antoine Manzagol