# Algorithms for Big Data (COMPSCI 229r), Lecture 12 | Summary and Q&A

July 12, 2016, by Harvard University

## TL;DR

Gordon's Theorem lets dimensionality-reduction maps be analyzed instance-wise: the number of target dimensions needed depends on the geometry of the specific set of vectors being embedded, not just on worst-case parameters.


### Q: What are the differences between distributional JL (DJL) and metric JL (MJL)?

DJL asks for a single distribution over maps such that, for any fixed vector, the map drawn from it preserves that vector's norm up to 1 ± ε with probability at least 1 − δ. MJL instead asks, for a given set of n points, for one map that preserves all pairwise distances up to 1 ± ε. The two problems have different goals: DJL is a statement about a distribution and an arbitrary fixed vector, while MJL is a statement about a fixed point set; a DJL distribution yields an MJL map by a union bound over the O(n²) pairwise difference vectors.
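A minimal numerical sketch of the DJL setup, assuming a Gaussian random projection (one standard DJL distribution; the lecture's exact construction may differ). The dimensions and the seed are illustrative choices, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 1000, 200          # original and target dimensions (illustrative)
x = rng.standard_normal(n)
x /= np.linalg.norm(x)    # fix an arbitrary unit vector ahead of time

# Pi has i.i.d. N(0, 1/m) entries, so E ||Pi x||^2 = ||x||^2 = 1.
Pi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Pi @ x

# DJL says this distortion is small with high probability over Pi.
distortion = abs(np.linalg.norm(y) ** 2 - 1.0)
print(f"squared-norm distortion: {distortion:.3f}")
```

Note that the vector x is fixed before Π is drawn; the guarantee is over the randomness of Π, which is exactly what distinguishes DJL from the per-point-set MJL statement.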

### Q: How can Gordon's Theorem be used to analyze dimensionality reduction techniques?

Gordon's Theorem evaluates dimensionality reduction beyond worst-case scenarios: rather than depending only on the number of vectors, its bound on the target dimension depends on the geometry of the specific set T of vectors to be reduced, measured by its Gaussian mean width. This yields, for a desired error ε and failure probability, a sufficient number of dimensions to preserve every vector in T.
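For reference, one standard statement of Gordon's Theorem for a Gaussian projection (this is the common textbook parameterization, not necessarily the lecture's exact constants). The Gaussian mean width of a set T in the unit sphere is

```latex
\[
  g(T) \;=\; \mathbb{E}_{g \sim \mathcal{N}(0, I_n)} \, \sup_{x \in T} \langle g, x \rangle,
  \qquad T \subseteq S^{n-1}.
\]
If $\Pi \in \mathbb{R}^{m \times n}$ has i.i.d.\ $\mathcal{N}(0, 1/m)$ entries and
\[
  m \;\gtrsim\; \varepsilon^{-2} \bigl( g(T)^2 + 1 \bigr),
\]
then with high probability
\[
  \sup_{x \in T} \bigl|\, \|\Pi x\|_2^2 - 1 \,\bigr| \;<\; \varepsilon .
\]
```

For a finite T one has g(T) ≲ √(log |T|), so this recovers the usual JL dimension m ≈ ε⁻² log |T|, while sets with more structure can need far fewer dimensions.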

### Q: What is the relationship between DJL and Gordon's Theorem?

DJL implies Gordon's Theorem: a distribution satisfying the DJL guarantee, with its parameters ε and δ set appropriately in terms of the geometry of the set, automatically satisfies the conclusion of Gordon's Theorem. Instance-wise Gordon-style guarantees therefore follow from the simpler distributional property, connecting the two ways of analyzing dimensionality reduction.

### Q: How does Gordon's Theorem consider instance-wise scenarios?

Gordon's Theorem allows dimensionality reduction to be evaluated on a case-by-case basis: the bound on the number of dimensions required to preserve a given set of vectors in the ℓ₂ norm is governed by the geometry of that particular set, not just its size. Different proof techniques for bounding the relevant quantity, such as net arguments of varying sophistication, yield different upper bounds on the required number of dimensions.
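The instance-wise quantity in Gordon's bound, the Gaussian mean width g(T), is easy to estimate by Monte Carlo for a finite set. A minimal sketch, with an illustrative random set T (the set, sizes, and seed are assumptions, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)

n, k = 100, 50
T = rng.standard_normal((k, n))
T /= np.linalg.norm(T, axis=1, keepdims=True)   # k unit vectors in R^n

# g(T) = E sup_{x in T} <g, x>: average the max inner product
# over many independent Gaussian draws g ~ N(0, I_n).
trials = 2000
G = rng.standard_normal((trials, n))
g_T = (G @ T.T).max(axis=1).mean()

print(f"estimated g(T): {g_T:.2f}, "
      f"sqrt(2 log k) bound: {np.sqrt(2 * np.log(k)):.2f}")
```

The estimate sits below the generic finite-set bound √(2 log k), illustrating why instance-wise bounds can beat the worst-case dimension count when the set has favorable geometry.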

## Summary & Key Takeaways

• Distributional JL (DJL) and Metric JL (MJL) are two formulations of the Johnson–Lindenstrauss dimensionality-reduction problem, each with its own lower bounds.

• DJL lower bounds show that the target dimension must grow inverse-quadratically in the error ε and logarithmically in the inverse failure probability 1/δ; for embedding n points (MJL), this translates to a dimension logarithmic in the number of points.

• Gordon's Theorem extends the analysis to instance-wise scenarios, bounding the number of dimensions required to preserve a given set of vectors in the ℓ₂ norm in terms of the set's geometry. Different proof techniques, such as net arguments of varying sophistication, yield different upper bounds on the required number of dimensions.

• DJL implies Gordon's Theorem, showing that the instance-wise guarantee follows from the distributional one and connecting the two approaches to analyzing dimensionality reduction.