
no code implementations • NeurIPS 2021 • Pascal Esser, Leena Chennuru Vankadara, Debarghya Ghoshdastidar

While VC-dimension does result in trivial generalisation error bounds in this setting as well, we show that transductive Rademacher complexity can explain the generalisation properties of graph convolutional networks for stochastic block models.
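As background for the snippet above: a stochastic block model plants community structure by connecting same-block node pairs with one probability and cross-block pairs with another. A minimal NumPy sketch of sampling such a graph (illustrative only, not the paper's construction; `sample_sbm` is a hypothetical helper):

```python
import numpy as np

def sample_sbm(sizes, p_in, p_out, rng=None):
    """Sample a symmetric adjacency matrix from a planted-partition SBM.

    Nodes in the same block connect with probability p_in,
    nodes in different blocks with probability p_out.
    """
    rng = np.random.default_rng(rng)
    n = sum(sizes)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    same_block = labels[:, None] == labels[None, :]
    probs = np.where(same_block, p_in, p_out)
    # sample the strict upper triangle, then symmetrise (no self-loops)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    return (upper | upper.T).astype(int), labels

adj, labels = sample_sbm([50, 50], p_in=0.5, p_out=0.05, rng=0)
```

With `p_in` well above `p_out`, as here, the two planted blocks are visible in the adjacency structure, which is the regime such generalisation analyses typically consider.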

no code implementations • 18 Nov 2021 • Leena Chennuru Vankadara, Philipp Michael Faller, Lenon Minorics, Debarghya Ghoshdastidar, Dominik Janzing

Here, we study the problem of *causal generalization* -- generalizing from the observational to interventional distributions -- in forecasting.

no code implementations • 18 Oct 2021 • Leena Chennuru Vankadara, Sebastian Bordt, Ulrike Von Luxburg, Debarghya Ghoshdastidar

Despite the ubiquity of kernel-based clustering, surprisingly few statistical guarantees exist beyond settings that consider strong structural assumptions on the data generation process.

no code implementations • 8 Oct 2021 • Mahalakshmi Sabanayagam, Pascal Esser, Debarghya Ghoshdastidar

This paper focuses on semi-supervised learning on graphs, and explains the above observations through the lens of Neural Tangent Kernels (NTKs).

1 code implementation • 6 Oct 2021 • Mahalakshmi Sabanayagam, Leena Chennuru Vankadara, Debarghya Ghoshdastidar

Using the proposed graph distance, we present two clustering algorithms and show that they achieve state-of-the-art results.

no code implementations • NeurIPS 2020 • Michaël Perrot, Pascal Mattia Esser, Debarghya Ghoshdastidar

The goal of clustering is to group similar objects into meaningful partitions.

no code implementations • 1 Dec 2019 • Leena Chennuru Vankadara, Debarghya Ghoshdastidar

This is the first work that provides such optimality guarantees for the kernel k-means as well as its convex relaxation.
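Kernel k-means, the method these guarantees concern, runs Lloyd's iteration implicitly in feature space: the squared distance from a point to a cluster centroid expands entirely in Gram-matrix entries, $\|\phi(x_i) - \mu_c\|^2 = K_{ii} - \frac{2}{|C|}\sum_{j \in C} K_{ij} + \frac{1}{|C|^2}\sum_{j,l \in C} K_{jl}$. A minimal sketch of that standard formulation (not the paper's analysis or its convex relaxation; `kernel_kmeans` is an illustrative helper):

```python
import numpy as np

def kernel_kmeans(K, k, n_iter=50, rng=None):
    """Lloyd-style kernel k-means on a precomputed Gram matrix K."""
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    labels = rng.integers(k, size=n)
    for _ in range(n_iter):
        dist = np.empty((n, k))
        for c in range(k):
            mask = labels == c
            if not mask.any():
                dist[:, c] = np.inf      # empty cluster: never chosen
                continue
            # ||phi(x_i) - mu_c||^2 in Gram-matrix entries
            dist[:, c] = (np.diag(K)
                          - 2 * K[:, mask].mean(axis=1)
                          + K[np.ix_(mask, mask)].mean())
        new = dist.argmin(axis=1)
        if (new == labels).all():
            break
        labels = new
    return labels

# usage: a linear kernel on a toy two-blob dataset
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
labels = kernel_kmeans(X @ X.T, k=2, rng=1)
```

Swapping in a nonlinear Gram matrix (e.g. an RBF kernel) changes nothing else in the loop, which is what makes the kernelised variant attractive in practice.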

1 code implementation • NeurIPS 2018 • Debarghya Ghoshdastidar, Ulrike Von Luxburg

Hypothesis testing for graphs has been an important tool in applied research fields for more than two decades, and still remains a challenging problem as one often needs to draw inference from few replicates of large graphs.

1 code implementation • NeurIPS 2019 • Debarghya Ghoshdastidar, Michaël Perrot, Ulrike Von Luxburg

We address the classical problem of hierarchical clustering, but in a framework where one does not have access to a representation of the objects or their pairwise similarities.

no code implementations • 4 Jul 2017 • Debarghya Ghoshdastidar, Maurilio Gutzeit, Alexandra Carpentier, Ulrike Von Luxburg

Given a population of $m$ graphs from each model, we derive minimax separation rates for the problem of testing $P=Q$ against $d(P, Q)>\rho$.
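In the snippet's notation, the problem has the standard minimax-testing form

\[
H_0 : P = Q \qquad \text{vs.} \qquad H_1 : d(P, Q) > \rho ,
\]

and the minimax separation rate is the smallest $\rho$, as a function of the sample size $m$ and the graph size, at which some test can keep both the type I and type II error probabilities below a fixed level.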

no code implementations • 17 May 2017 • Debarghya Ghoshdastidar, Maurilio Gutzeit, Alexandra Carpentier, Ulrike Von Luxburg

We consider a two-sample hypothesis testing problem, where the distributions are defined on the space of undirected graphs, and one has access to only one observation from each model.

no code implementations • 5 Apr 2017 • Siavash Haghiri, Debarghya Ghoshdastidar, Ulrike Von Luxburg

We consider machine learning in a comparison-based setting where we are given a set of points in a metric space, but we have no access to the actual distances between the points.
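In this comparison-based setting, all access to the geometry goes through an oracle answering triplet queries of the form "is point $i$ closer to $j$ or to $k$?". A toy sketch (the linear-scan search below is only a baseline, not the paper's more query-efficient procedures; both helpers are hypothetical):

```python
import numpy as np

def make_oracle(X):
    """Comparison oracle over hidden points X: answers whether x_i is
    closer to x_j than to x_k, without ever revealing a distance."""
    def closer(i, j, k):
        return np.linalg.norm(X[i] - X[j]) < np.linalg.norm(X[i] - X[k])
    return closer

def comparison_nn(query, candidates, closer):
    """Nearest neighbour of `query` among `candidates` using only
    pairwise comparisons: a tournament-style linear scan."""
    best = candidates[0]
    for c in candidates[1:]:
        if closer(query, c, best):
            best = c
    return best

X = np.array([[0.0], [10.0], [1.0], [5.0]])   # hidden coordinates
closer = make_oracle(X)
nearest = comparison_nn(0, [1, 2, 3], closer)  # index of the closest point to 0
```

The point of the setting is that `comparison_nn` never touches `X` directly: only ordinal answers cross the oracle boundary, which models data collected from human similarity judgements.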

no code implementations • 21 Feb 2016 • Debarghya Ghoshdastidar, Ambedkar Dukkipati

This work is motivated by two issues that arise when a hypergraph partitioning approach is used to tackle computer vision problems: (i) The uniform hypergraphs constructed for higher-order learning contain all edges, but most have negligible weights.

no code implementations • 7 May 2015 • Debarghya Ghoshdastidar, Ambedkar Dukkipati

Hypergraph partitioning lies at the heart of a number of problems in machine learning and network sciences.

no code implementations • NeurIPS 2014 • Debarghya Ghoshdastidar, Ambedkar Dukkipati

Spectral graph partitioning methods have received significant attention from both practitioners and theorists in computer science.
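As background for the snippet above, classical spectral partitioning embeds nodes with the bottom eigenvectors of the graph Laplacian L = D - A and then clusters the embedded rows. A minimal sketch of that standard recipe (not the paper's hypergraph extension; `spectral_partition` is an illustrative helper):

```python
import numpy as np

def spectral_partition(adj, k):
    """Partition a graph into k groups via the unnormalised Laplacian:
    embed each node with the k bottom eigenvectors of L = D - A,
    then run a tiny Lloyd loop on the embedded rows."""
    deg = adj.sum(axis=1)
    L = np.diag(deg) - adj
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    emb = vecs[:, :k]
    # farthest-point initialisation keeps the seeds spread out
    centers = [emb[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(emb - c, axis=1) for c in centers], axis=0)
        centers.append(emb[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(50):
        labels = np.linalg.norm(emb[:, None, :] - centers[None, :, :],
                                axis=2).argmin(axis=1)
        new = np.array([emb[labels == c].mean(axis=0) if (labels == c).any()
                        else centers[c] for c in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

# usage: two disjoint triangles should land in different groups
adj = np.zeros((6, 6), dtype=int)
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    adj[a, b] = adj[b, a] = 1
parts = spectral_partition(adj, k=2)
```

On a disconnected graph like this one the bottom eigenvectors are piecewise constant on the components, so the embedding collapses each component to a single point and the clustering step becomes trivial.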

no code implementations • CVPR 2014 • Debarghya Ghoshdastidar, Ambedkar Dukkipati, Ajay P. Adsul, Aparna S. Vijayan

Motivated by multi-distribution divergences, which originate in information theory, we propose a notion of 'multi-point' kernels, and study their applications.

no code implementations • 21 Jun 2012 • Debarghya Ghoshdastidar, Ambedkar Dukkipati, Shalabh Bhatnagar

This motivates us to study SF schemes for gradient estimation using the q-Gaussian distribution.
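A smoothed-functional (SF) scheme estimates a gradient from zeroth-order function evaluations alone by perturbing the input with random noise. A sketch of the classical one-sided Gaussian-perturbation estimator (the paper's contribution is to replace the Gaussian with a q-Gaussian, which this sketch does not do; `sf_gradient` is a hypothetical helper):

```python
import numpy as np

def sf_gradient(f, x, beta=0.1, n=4000, rng=None):
    """One-sided smoothed-functional gradient estimate with Gaussian
    perturbations:  grad f(x) ~ E[ eta * (f(x + beta*eta) - f(x)) ] / beta,
    averaged over n standard-normal perturbation vectors eta."""
    rng = np.random.default_rng(rng)
    eta = rng.standard_normal((n, x.size))
    vals = np.array([f(x + beta * e) for e in eta]) - f(x)
    return (eta * vals[:, None]).mean(axis=0) / beta

# usage: for f(v) = v.v the true gradient at x is 2x
x0 = np.array([1.0, 2.0])
grad_est = sf_gradient(lambda v: float(v @ v), x0, beta=0.1, n=4000, rng=0)
```

Only function values are ever queried, which is why such estimators are the workhorse of simulation-based optimisation; the choice of perturbation distribution (here Gaussian, in the paper q-Gaussian) controls the bias-variance trade-off.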

no code implementations • 3 May 2012 • Ambedkar Dukkipati, Gaurav Pandey, Debarghya Ghoshdastidar, Paramita Koley, D. M. V. Satya Sriram

In this paper, we introduce a generative maximum entropy classification method with feature selection for high-dimensional data such as text datasets.

no code implementations • 9 Apr 2012 • Debarghya Ghoshdastidar, Ambedkar Dukkipati

Motivated by the importance of power-law distributions in statistical modeling, in this paper, we propose the notion of power-law kernels to investigate power-laws in learning problems.
