Haggai Maron

Israel
1K followers · 500+ connections

About

I am an assistant professor at the ECE faculty @Technion. I am also a senior research…

Experience

  • Technion - Israel Institute of Technology

    Haifa, Haifa District, Israel

  • -

    Tel Aviv - Jaffa, Tel Aviv District, Israel

  • -

    Tel Aviv - Jaffa, Tel Aviv District, Israel

  • -

    Israel

  • -

    Tel Aviv - Jaffa, Tel Aviv District, Israel

Education

  • Weizmann Institute of Science

    -

    Computer Science and Mathematics, Weizmann Institute of Science. Working under the supervision of Professor Yaron Lipman. Areas of interest are Optimization, Computer Vision, Machine Learning, and Computer Graphics.

  • -

    Dean's prize for MSc students in recognition of scientific accomplishments (2015).

Publications

  • Auxiliary Learning by Implicit Differentiation

    Technical report 2020

    Training with multiple auxiliary tasks is a common practice used in deep learning for improving the performance on the main task of interest. Two main challenges arise in this multi-task learning setting: (i) designing useful auxiliary tasks; and (ii) combining auxiliary tasks into a single coherent loss. We propose a novel framework, AuxiLearn, that targets both challenges, based on implicit differentiation. First, when useful auxiliaries are known, we propose learning a network that combines all losses into a single coherent objective function. This network can learn non-linear interactions between auxiliary tasks. Second, when no useful auxiliary task is known, we describe how to learn a network that generates a meaningful, novel auxiliary task. We evaluate AuxiLearn in a series of tasks and domains, including image segmentation and learning with attributes. We find that AuxiLearn consistently improves accuracy compared with competing methods.

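    To make the first idea concrete, here is a minimal sketch, assuming a PyTorch setting (the class name and architecture are my own illustration, not the paper's code): a small network maps the vector of per-task losses to a single scalar objective, so non-linear interactions between losses can be learned. In the paper the combiner's parameters are themselves tuned on validation data via implicit differentiation, which this sketch omits.

        import torch
        import torch.nn as nn

        class LossCombiner(nn.Module):
            """Maps a vector of per-task losses to one scalar training objective."""
            def __init__(self, num_losses, hidden=16):
                super().__init__()
                # Softplus activations keep the combined objective positive-ish.
                self.net = nn.Sequential(
                    nn.Linear(num_losses, hidden), nn.Softplus(),
                    nn.Linear(hidden, 1), nn.Softplus(),
                )

            def forward(self, losses):        # losses: tensor of shape (num_losses,)
                return self.net(losses).squeeze()

        combiner = LossCombiner(num_losses=3)
        losses = torch.tensor([0.9, 0.4, 0.2])   # main + two auxiliary losses (dummy)
        total = combiner(losses)                 # single scalar objective
        total.backward()                         # gradients flow to combiner weights
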
  • Self-Supervised Learning for Domain Adaptation on Point-Clouds

    Technical report 2020

    Self-supervised learning (SSL) makes it possible to learn useful representations from unlabeled data and has been applied effectively for domain adaptation (DA) on images. It is still unknown if and how it can be leveraged for domain adaptation in 3D perception. Here we describe the first study of SSL for DA on point clouds. We introduce a new pretext task, Region Reconstruction, motivated by the deformations encountered in sim-to-real transformation. We also demonstrate how it can be combined with a training procedure motivated by the MixUp method. Evaluations on six domain adaptations across synthetic and real furniture data demonstrate large improvements over previous work.

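    As a toy illustration of a region-based pretext task (my own reading of the mechanics; names and parameters are hypothetical), one can pick a random anchor point, take its k nearest neighbors as a region, and distort that region; a network is then trained to reconstruct the original cloud:

        import numpy as np

        def distort_region(points, k=128, noise=0.05, rng=np.random.default_rng(0)):
            """points: (N, 3) array. Returns a distorted copy and the region indices."""
            anchor = points[rng.integers(len(points))]
            dists = np.linalg.norm(points - anchor, axis=1)
            region = np.argsort(dists)[:k]                 # k nearest neighbors of anchor
            distorted = points.copy()
            distorted[region] += rng.normal(0.0, noise, (k, 3))  # jitter the region
            return distorted, region

        cloud = np.random.default_rng(1).standard_normal((1024, 3))
        distorted, region = distort_region(cloud)
        # A reconstruction loss would compare the model's output on `distorted`
        # with the original `cloud`, restricted to `region`.
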
  • Learning Algebraic Multigrid Using Graph Neural Networks

    ICML 2020

    Efficient numerical solvers for sparse linear systems are crucial in science and engineering. One of the fastest methods for solving large-scale sparse linear systems is algebraic multigrid (AMG). The main challenge in the construction of AMG algorithms is the selection of the prolongation operator -- a problem-dependent sparse matrix which governs the multiscale hierarchy of the solver and is critical to its efficiency. Over many years, numerous methods have been developed for this task, and yet there is no known single right answer except in very special cases. Here we propose a framework for learning AMG prolongation operators for linear systems with sparse symmetric positive (semi-)definite matrices. We train a single graph neural network to learn a mapping from an entire class of such matrices to prolongation operators, using an efficient unsupervised loss function. Experiments on a broad class of problems demonstrate improved convergence rates compared to classical AMG, demonstrating the potential utility of neural networks for developing sparse system solvers.

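    For readers unfamiliar with AMG, the sketch below shows the role the prolongation operator P plays in a standard two-grid cycle (textbook machinery, not the paper's code; the toy P is a naive pair-aggregation choice). The paper's contribution is learning a good P with a graph neural network.

        import numpy as np

        def two_grid_step(A, P, b, x, n_smooth=2, omega=0.67):
            """One two-grid cycle for A x = b with damped-Jacobi smoothing."""
            Dinv = 1.0 / np.diag(A)
            for _ in range(n_smooth):                 # pre-smoothing
                x = x + omega * Dinv * (b - A @ x)
            r = b - A @ x                             # fine-grid residual
            Ac = P.T @ A @ P                          # Galerkin coarse operator
            ec = np.linalg.solve(Ac, P.T @ r)         # solve coarse error equation
            x = x + P @ ec                            # prolongate the correction
            for _ in range(n_smooth):                 # post-smoothing
                x = x + omega * Dinv * (b - A @ x)
            return x

        # Toy example: 1D Poisson matrix, aggregating node pairs into coarse nodes.
        n = 8
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        P = np.kron(np.eye(n // 2), np.ones((2, 1)))
        x = two_grid_step(A, P, b=np.ones(n), x=np.zeros(n))
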
  • On Learning Sets of Symmetric Elements

    ICML 2020, Outstanding Paper Award

    Learning from unordered sets is a fundamental learning setup, which is attracting increasing attention. Research in this area has focused on the case where elements of the set are represented by feature vectors, and far less emphasis has been given to the common case where set elements themselves adhere to certain symmetries. That case is relevant to numerous applications, from deblurring image bursts to multi-view 3D shape recognition and reconstruction.
    In this paper, we present a principled approach to learning sets of general symmetric elements. We first characterize the space of linear layers that are equivariant both to element reordering and to the inherent symmetries of elements, like translation in the case of images. We further show that networks that are composed of these layers, called Deep Sets for Symmetric Elements (DSS) layers, are universal approximators of both invariant and equivariant functions. DSS layers are also straightforward to implement. Finally, we show that they improve over existing set-learning architectures in a series of experiments with images, graphs, and point clouds.

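    A schematic DSS-style layer for sets of images might look as follows (a simplified reading of the construction, with assumed shapes and names): a per-element convolution plus a convolution of the summed set, giving equivariance both to element reordering and to translation.

        import torch
        import torch.nn as nn

        class DSSConvLayer(nn.Module):
            def __init__(self, in_ch, out_ch):
                super().__init__()
                self.siamese = nn.Conv2d(in_ch, out_ch, 3, padding=1)    # per element
                self.aggregate = nn.Conv2d(in_ch, out_ch, 3, padding=1)  # set level

            def forward(self, x):            # x: (batch, n_elements, C, H, W)
                b, n, c, h, w = x.shape
                per_elem = self.siamese(x.reshape(b * n, c, h, w))
                per_elem = per_elem.reshape(b, n, -1, h, w)
                pooled = self.aggregate(x.sum(dim=1))     # (b, out_ch, H, W)
                return per_elem + pooled.unsqueeze(1)     # broadcast over elements

        layer = DSSConvLayer(3, 8)
        out = layer(torch.randn(2, 5, 3, 32, 32))         # two sets of five images
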
  • Set2Graph: Learning Graphs From Sets

    Technical report 2020

    Many problems in machine learning (ML) can be cast as learning functions from sets to graphs, or more generally to hypergraphs; in short, Set2Graph functions. Examples include clustering, learning vertex and edge features on graphs, and learning triplet data in a collection. Current neural network models that approximate Set2Graph functions come from two main ML sub-fields: equivariant learning, and similarity learning. Equivariant models would be in general computationally challenging or even infeasible, while similarity learning models can be shown to have limited expressive power. In this paper, we suggest a neural network model family for learning Set2Graph functions that is both practical and of maximal expressive power (universal), that is, can approximate arbitrary continuous Set2Graph functions over compact sets. Testing our models on different machine learning tasks, including an application to particle physics, we find them favorable to existing baselines.

  • Approximation Power of Invariant Graph Networks

    NeurIPS 2019 Graph Representation learning workshop

    Learning graph data is of huge interest to the machine learning community. Recently, graph neural network models motivated by algebraic invariance and equivariance principles have been proposed (Ravanbakhsh et al., 2017; Kondor et al., 2018; Maron et al., 2019b). These models were shown to be universal (Maron et al., 2019c; Keriven and Peyré, 2019), in contrast to the popular message-passing models (Xu et al., 2019; Morris et al., 2018). In this note, we formulate several open problems aiming at characterizing the trade-off between the expressive power and the complexity of these models.

  • Deep and Convex Shape Analysis

    SIGGRAPH Doctoral Consortium

    In this paper, I review the main results obtained during my Ph.D. studies at the Weizmann Institute of Science under the guidance of Professor Yaron Lipman. Two fundamental problems in shape analysis were considered: (1) how to apply deep learning techniques to geometric objects and (2) how to compute meaningful maps between shapes. My work has resulted in several novel methods for applying deep learning to surfaces, point clouds, and hyper-graphs, as well as new efficient techniques to solve relaxations of well-known matching problems. The paper discusses these two problems, surveys the suggested solutions, and points out several directions for future work, including a promising direction that combines both problems.

  • Provably Powerful Graph Networks

    NeurIPS 2019

    Recently, the Weisfeiler-Lehman (WL) graph isomorphism test was used to measure the expressive power of graph neural networks (GNNs). It was shown that popular message-passing GNNs cannot distinguish between graphs that are indistinguishable by the 1-WL test (Morris et al. 2018; Xu et al. 2019). Unfortunately, many simple instances of graphs are indistinguishable by the 1-WL test.
    In a search for more expressive graph learning models, we build upon the recent k-order invariant and equivariant graph neural networks (Maron et al. 2019a,b) and present two results:
    First, we show that such k-order networks can distinguish between non-isomorphic graphs as well as the k-WL tests, which are provably stronger than the 1-WL test for k > 2. This makes these models strictly stronger than message-passing models. Unfortunately, the higher expressiveness of these models comes with a computational cost of processing high-order tensors.
    Second, setting our goal at building a provably stronger, simple, and scalable model, we show that a reduced 2-order network containing just a scaled identity operator, augmented with a single quadratic operation (matrix multiplication), has provable 3-WL expressive power. Put differently, we suggest a simple model that interleaves applications of a standard multilayer perceptron (MLP) applied to the feature dimension with matrix multiplication. We validate this model by presenting state-of-the-art results on popular graph classification and regression tasks. To the best of our knowledge, this is the first practical invariant/equivariant model with guaranteed 3-WL expressiveness, strictly stronger than message-passing models.

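    The core building block can be sketched as follows (shapes and names are my assumptions): two MLPs applied along the feature dimension of an n x n x d tensor, combined by matrix multiplication over the n x n dimensions; that matrix product is the single quadratic operation that lifts expressiveness beyond message passing.

        import torch
        import torch.nn as nn

        class PPGNBlock(nn.Module):
            def __init__(self, d_in, d_out):
                super().__init__()
                self.mlp1 = nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU())
                self.mlp2 = nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU())
                self.mix = nn.Linear(d_in + d_out, d_out)

            def forward(self, x):                # x: (batch, n, n, d_in)
                a = self.mlp1(x).permute(0, 3, 1, 2)           # (b, d_out, n, n)
                b = self.mlp2(x).permute(0, 3, 1, 2)
                prod = (a @ b).permute(0, 2, 3, 1)             # matmul over n x n
                return self.mix(torch.cat([x, prod], dim=-1))  # mix with skip input

        block = PPGNBlock(d_in=4, d_out=8)
        out = block(torch.randn(2, 10, 10, 4))   # two graphs on 10 nodes each
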
  • Controlling Neural Level Sets

    NeurIPS 2019

    The level sets of neural networks represent fundamental properties such as decision boundaries of classifiers and are used to model non-linear manifold data such as curves and surfaces. Thus, methods for controlling the neural level sets could find many applications in machine learning. In this paper, we present a simple and scalable approach to directly control level sets of a deep neural network. Our method consists of two parts: (i) sampling of the neural level sets, and (ii) relating the samples’ positions to the network parameters. The latter is achieved by a sample network that is constructed by adding a single fixed linear layer to the original network. In turn, the sample network can be used to incorporate the level set samples into a loss function of interest. We have tested our method on three different learning tasks: training networks robust to adversarial attacks, improving generalization to unseen data, and curve and surface reconstruction from point clouds. Notably, we increase robust accuracy to the level of standard classification accuracy in off-the-shelf networks, improving it by 2% in MNIST and 27% in CIFAR10 compared to state-of-the-art methods. For surface reconstruction, we produce high fidelity surfaces directly from raw 3D point clouds.

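    The sampling ingredient can be illustrated directly (a rough sketch under my own assumptions, using a toy function rather than a trained network): points are moved onto the zero level set of f with generalized Newton steps, x <- x - f(x) * grad f(x) / ||grad f(x)||^2.

        import torch

        def project_to_level_set(f, x, steps=10):
            """Move points x toward the zero level set {x : f(x) = 0}."""
            for _ in range(steps):
                x = x.detach().requires_grad_(True)
                (g,) = torch.autograd.grad(f(x).sum(), x)     # per-point gradients
                step = f(x).unsqueeze(-1) * g / (g.pow(2).sum(-1, keepdim=True) + 1e-12)
                x = x - step
            return x.detach()

        f = lambda x: (x ** 2).sum(-1) - 1.0      # toy "network": unit-sphere level set
        pts = project_to_level_set(f, torch.randn(64, 3))
        print(f(pts).abs().max())                 # close to 0: points lie on the set
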
  • On the Universality of Invariant Networks

    International Conference on Machine Learning (ICML) 2019

    Constraining linear layers in neural networks to respect symmetry transformations from a group G is a common design principle for invariant networks that has found many applications in machine learning.
    In this paper, we consider a fundamental question that has received little attention to date: Can these networks approximate any (continuous) invariant function?
    We tackle the rather general case where G ≤ S_n is an arbitrary subgroup of the symmetric group that acts on ℝ^n by permuting coordinates. This setting includes several recent popular invariant networks. We present two main results: First, G-invariant networks are universal if high-order tensors are allowed. Second, there are groups G for which higher-order tensors are unavoidable for obtaining universality.
    G-invariant networks consisting of only first-order tensors are of special interest due to their practical value. We conclude the paper by proving a necessary condition for the universality of G-invariant networks that incorporate only first-order tensors. Lastly, we propose a conjecture stating that this condition is also sufficient.

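    As background for what G-invariance means, the simplest (though inefficient) way to make any function invariant is to average it over the group, the Reynolds operator f_G(x) = (1/|G|) sum_{g in G} f(g . x). A tiny sketch with the cyclic group acting by coordinate shifts (example is mine, not from the paper):

        import numpy as np

        def reynolds_invariant(f, x):
            """Average f over all cyclic shifts of x; the result is shift-invariant."""
            return np.mean([f(np.roll(x, s)) for s in range(len(x))])

        f = lambda x: 2.0 * x[0] + x[1] ** 2          # not invariant on its own
        x = np.array([1.0, 2.0, 3.0])
        print(reynolds_invariant(f, x))
        print(reynolds_invariant(f, np.roll(x, 1)))   # same value: invariance
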
  • Sinkhorn Algorithm for Lifted Assignment Problems

    SIAM Journal On Imaging Sciences

    Recently, Sinkhorn's algorithm was applied for solving regularized linear programs emerging from optimal transport very efficiently. Sinkhorn's algorithm is an efficient method of projecting a positive matrix onto the polytope of doubly-stochastic matrices. It is based on alternating closed-form Bregman projections on the larger polytopes of row-stochastic and column-stochastic matrices.
    In this paper we generalize the Sinkhorn projection algorithm to higher-dimensional polytopes originating from well-known lifted linear program relaxations of the Markov Random Field (MRF) energy minimization problem and the Quadratic Assignment Problem (QAP). We derive a closed-form projection on one-sided local polytopes, which can be seen as a high-dimensional, generalized version of the row/column-stochastic polytopes. We then use these projections to devise a provably convergent algorithm to solve regularized linear program relaxations of MRF and QAP. Furthermore, as the regularization is decreased, both the solution and the optimal energy value converge to those of the respective linear program. The resulting algorithm is considerably more scalable than standard linear solvers and is able to solve significantly larger linear programs.

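    The classical Sinkhorn projection that the paper generalizes alternates two closed-form Bregman projections, implemented as row and column normalizations. A minimal NumPy sketch:

        import numpy as np

        def sinkhorn(K, iters=200):
            """Scale a positive matrix K to be approximately doubly stochastic."""
            for _ in range(iters):
                K = K / K.sum(axis=1, keepdims=True)   # project onto row-stochastic
                K = K / K.sum(axis=0, keepdims=True)   # project onto column-stochastic
            return K

        S = sinkhorn(np.random.default_rng(0).random((5, 5)))
        print(S.sum(axis=0))   # ~ all ones
        print(S.sum(axis=1))   # ~ all ones
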
  • Surface Networks via General Covers

    ICCV 2019

    Developing deep learning techniques for geometric data is an active and fruitful research area. This paper tackles the problem of sphere-type surface learning by developing a novel surface-to-image representation. Using this representation we are able to quickly adapt successful CNN models to the surface setting.
    The surface-image representation is based on a covering map from the image domain to the surface. Namely, the map wraps around the surface several times, making sure that every part of the surface is well represented in the image. Unlike previous surface-to-image representations, we provide a low-distortion coverage of all surface parts in a single image.
    We have used the surface-to-image representation to apply standard CNN models to the problem of semantic shape segmentation and shape retrieval, achieving state of the art results in both.

  • Invariant and Equivariant Graph Networks

    ICLR 2019

    Invariant and equivariant networks have been successfully used for learning images, sets, point clouds, and graphs. A basic challenge in developing such networks is finding the maximal collection of invariant and equivariant linear layers. Although this question is answered for the first three examples (for popular transformations, at least), a full characterization of invariant and equivariant linear layers for graphs is not known. In this paper, we provide a characterization of all permutation invariant and equivariant linear layers for (hyper-)graph data, and show that their dimension, in the case of edge-value graph data, is 2 and 15, respectively. More generally, for graph data defined on k-tuples of nodes, the dimension is the k-th and 2k-th Bell numbers. Orthogonal bases for the layers are computed, including generalization to multi-graph data. The constant number of basis elements and their characteristics allow successfully applying the networks to graphs of different sizes. From the theoretical point of view, our results generalize and unify recent advances in equivariant deep learning. Applying these new linear layers in a simple deep neural network framework is shown to achieve comparable results to state-of-the-art methods and to have better expressivity than previous invariant and equivariant bases.

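    To make the equivariance condition concrete, here are a few of the 15 basis operations for edge-value data A in R^{n x n} (an illustrative subset I chose, not the paper's full basis), together with a numerical check that op(P A P^T) = P op(A) P^T for a permutation matrix P:

        import numpy as np

        def equivariant_ops(A):
            n = A.shape[0]
            ones = np.ones((n, n))
            return [
                A,                                         # identity
                A.T,                                       # transpose
                np.diag(np.diag(A)),                       # keep the diagonal
                A.sum(axis=1, keepdims=True) * ones / n,   # broadcast row sums
                ones * A.sum() / n ** 2,                   # broadcast total sum
            ]

        rng = np.random.default_rng(0)
        A = rng.random((4, 4))
        P = np.eye(4)[rng.permutation(4)]                  # random permutation matrix
        for op_pa, op_a in zip(equivariant_ops(P @ A @ P.T), equivariant_ops(A)):
            assert np.allclose(op_pa, P @ op_a @ P.T)      # equivariance holds
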
  • (Probably) Concave Graph Matching

    NIPS 2018

    Following (Zaslavskiy et al., 2009; Vestner et al., 2017), we analyze and generalize the idea of concave relaxations. We introduce the concepts of conditionally concave and probably conditionally concave energies on polytopes and show that they encapsulate many instances of the graph matching problem, including matching Euclidean graphs and graphs on surfaces. We further prove that local minima of probably conditionally concave energies on general matching polytopes (e.g., doubly stochastic) are with high probability extreme points of the matching polytope (e.g., permutations).

  • Multi-chart Generative Surface Modeling

    ACM SIGGRAPH Asia 2018

    A new image-like (i.e., tensor) data representation for genus-zero 3D shapes is devised. It is based on the observation that complicated shapes can be well represented by multiple parameterizations (charts), each focusing on a different part of the shape. The new tensor data representation is used as input to Generative Adversarial Networks for the task of 3D shape generation. The 3D shape tensor representation is based on a multi-chart structure that enjoys a shape covering property and scale-translation rigidity. Scale-translation rigidity facilitates high quality 3D shape learning and guarantees unique reconstruction. The multi-chart structure uses as input a dataset of 3D shapes (with arbitrary connectivity) and a sparse correspondence between them. The output of our algorithm is a generative model that learns the shape distribution and is able to generate novel shapes, interpolate shapes, and explore the generated shape space. The effectiveness of the method is demonstrated for the task of anatomic shape generation including human body and bone (teeth) shape generation.

  • Point Convolutional Neural Networks by Extension Operators

    ACM SIGGRAPH 2018

    This paper presents Point Convolutional Neural Networks (PCNN): a novel framework for applying convolutional neural networks to point clouds. The framework consists of two operators: extension and restriction, mapping point cloud functions to volumetric functions and vice versa. A point cloud convolution is defined by pull-back of the Euclidean volumetric convolution via an extension-restriction mechanism.
    The point cloud convolution is computationally efficient, invariant to the order of points in the point cloud, robust to different samplings and varying densities, and translation invariant; that is, the same convolution kernel is used at all points. PCNN generalizes image CNNs and allows readily adapting their architectures to the point cloud setting.
    Evaluations of PCNN on three central point cloud learning benchmarks show that it convincingly outperforms competing point cloud learning methods, and the vast majority of methods working with more informative shape representations such as surfaces and/or normals.

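    The extension operator can be illustrated with a simple radial-basis construction (my assumption of a Gaussian basis; the paper defines its own operators): per-point values f_i are lifted to a volumetric function (Ef)(x) = sum_i f_i * phi(||x - p_i||), on which a Euclidean convolution can act before restricting back to the points.

        import numpy as np

        def extend(points, values, query, sigma=0.2):
            """Evaluate an RBF extension of per-point values at volumetric queries."""
            d2 = ((query[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # (Q, N)
            phi = np.exp(-d2 / (2 * sigma ** 2))
            phi = phi / (phi.sum(axis=1, keepdims=True) + 1e-12)          # normalize
            return phi @ values                                           # (Q,)

        rng = np.random.default_rng(0)
        pts = rng.random((100, 3))          # point cloud samples p_i
        vals = pts[:, 0]                    # a toy per-point function f_i
        grid = rng.random((10, 3))          # volumetric query locations x
        print(extend(pts, vals, grid))
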
  • DS++: A Flexible, Scalable and Provably Tight Relaxation for Matching Problems

    ACM SIGGRAPH Asia 2017

    Correspondence problems are often modelled as quadratic optimization problems over permutations. Common scalable methods for approximating solutions of these NP-hard problems are the spectral relaxation for non-convex energies and the doubly stochastic (DS) relaxation for convex energies. Lately, it has been demonstrated that semidefinite programming relaxations can have considerably improved accuracy at the price of a much higher computational cost. We present a convex quadratic programming relaxation which is provably stronger than both DS and spectral relaxations, with the same scalability as the DS relaxation. The derivation of the relaxation also naturally suggests a projection method for achieving meaningful integer solutions which improves upon the standard closest-permutation projection. Our method can be easily extended to optimization over doubly stochastic matrices, partial or injective matching, and problems with additional linear constraints. We employ recent advances in optimization of linear-assignment type problems to achieve an efficient algorithm for solving the convex relaxation.
    We present experiments indicating that our method is more accurate than local minimization or competing relaxations for non-convex problems. We successfully apply our algorithm to shape matching and to the problem of ordering images in a grid, obtaining results which compare favorably with state of the art methods. We believe our results indicate that our method should be considered the method of choice for quadratic optimization over permutations.

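    The rounding step mentioned above can be sketched simply (the standard closest-permutation projection, shown here for context; the paper proposes an improved projection): the permutation nearest to a doubly stochastic X in Frobenius norm maximizes <X, P>, a linear assignment problem.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def closest_permutation(X):
            """Project a doubly stochastic matrix X onto the permutation matrices."""
            rows, cols = linear_sum_assignment(-X)    # maximize total matched weight
            P = np.zeros_like(X)
            P[rows, cols] = 1.0
            return P

        X = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.5, 0.3],
                      [0.2, 0.2, 0.6]])
        print(closest_permutation(X))                 # identity permutation here
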
  • Convolutional Neural Networks on Surfaces via Seamless Toric Covers

    ACM SIGGRAPH 2017

    The recent success of convolutional neural networks (CNNs) for image processing tasks is inspiring research efforts attempting to achieve similar success for geometric tasks. One of the main challenges in applying CNNs to surfaces is defining a natural convolution operator on surfaces. In this paper we present a method for applying deep learning to sphere-type shapes using a global seamless parameterization to a planar flat-torus, for which the convolution operator is well defined. As a result, the standard deep learning framework can be readily applied for learning semantic, high-level properties of the shape. An indication of our success in bridging the gap between images and surfaces is the fact that our algorithm succeeds in learning semantic information from an input of raw low-dimensional feature vectors. We demonstrate the usefulness of our approach by presenting two applications: human body segmentation, and automatic landmark detection on anatomical surfaces. We show that our algorithm compares favorably with competing geometric deep-learning algorithms for segmentation tasks, and is able to produce meaningful correspondences on anatomical surfaces where hand-crafted features are bound to fail.

  • Point Registration via Efficient Convex Relaxation

    ACM SIGGRAPH 2016

    Point cloud registration is a fundamental task in computer graphics, and more specifically, in rigid and non-rigid shape matching. The rigid shape matching problem can be formulated as the problem of simultaneously aligning and labelling two point clouds in 3D so that they are as similar as possible. We name this problem the Procrustes matching (PM) problem. The non-rigid shape matching problem can be formulated as a higher dimensional PM problem using the functional maps method. High dimensional PM problems are difficult non-convex problems which currently can only be solved locally using iterative closest point (ICP) algorithms or similar methods. Good initialization is crucial for obtaining a good solution.
    We introduce a novel and efficient convex SDP (semidefinite programming) relaxation for the PM problem. The algorithm is guaranteed to return a correct global solution of the problem when matching two isometric shapes which are either asymmetric or bilaterally symmetric.
    We show our algorithm gives state of the art results on popular shape matching datasets. We also show that our algorithm gives state of the art results for anatomical classification of shapes. Finally, we demonstrate the power of our method in aligning shape collections.

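    For context, the classical subproblem with known correspondences has a closed form (standard orthogonal Procrustes, shown here as a sketch; the paper's SDP relaxation addresses the much harder joint align-and-label problem):

        import numpy as np

        def procrustes_rotation(X, Y):
            """Rotation R minimizing ||X R - Y||_F for corresponding rows of X, Y."""
            U, _, Vt = np.linalg.svd(X.T @ Y)
            if np.linalg.det(U @ Vt) < 0:     # reflect back into SO(3) if needed
                U[:, -1] *= -1
            return U @ Vt

        rng = np.random.default_rng(0)
        X = rng.standard_normal((50, 3))
        c, s = np.cos(0.7), np.sin(0.7)
        R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        print(np.allclose(procrustes_rotation(X, X @ R_true), R_true))   # True
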
  • Passive Light and Viewpoint Sensitive Display of 3D Content

    International Conference on Computational Photography (ICCP) 2016

    We present a 3D light-sensitive display. The display is capable of presenting simple opaque 3D surfaces without self occlusions, while reproducing both viewpoint-sensitive depth parallax and illumination-sensitive variations such as shadows and highlights. Our display is passive in the sense that it does not rely on illumination sensors and on-the-fly rendering of the image content. Rather, it consists of optical elements that produce light transport paths approximating those present in the real scene.
    Our display uses two layers of Spatial Light Modulators (SLMs) whose micron-sized elements allow us to digitally simulate thin optical surfaces with flexible shapes. We derive a simple content creation algorithm utilizing geometric optics tools to design optical surfaces that can mimic the ray transfer of target virtual 3D scenes. We demonstrate a possible implementation of a small prototype, and present a number of simple virtual 3D scenes.


Honors & Awards

  • Outstanding Paper Award, ICML 2020

    International Conference on Machine Learning

    Received the Outstanding Paper Award at ICML 2020 for the paper "On Learning Sets of Symmetric Elements". The award was given to two papers out of 4,990 submissions at the world's top machine learning conference.

  • The Giora Yoel Yashinski Memorial Prize of Excellence for Ph.D. studies

    Weizmann Institute of Science

    A prize for outstanding students in recognition of accomplishments during the Ph.D.

  • SIGGRAPH 2019 Doctoral Consortium

    ACM SIGGRAPH

    Participant in the SIGGRAPH 2019 Doctoral Consortium

  • Feinberg Graduate School Dean's Prize for MSc studies

    Weizmann Institute of Science

    Recipient of the Feinberg Graduate School Dean's Prize in recognition of academic excellence and scientific accomplishments.

Languages

  • Hebrew

    -

  • English

    -
