|Year of offer|2018|
|Subject level|Graduate coursework|
|Fees|Subject EFTSL, Level, Discipline & Census Date|
The AMSI Summer School is a four-week program hosted at a different Australian university each year by the Australian Mathematical Sciences Institute. In 2016 it was held at RMIT, in 2017 at the University of Sydney and in 2018 at Monash University. This subject gives students the opportunity to attend modules taught by eminent lecturers from around Australia, opening up areas of the mathematical sciences and cognate disciplines that may not otherwise be available at the University of Melbourne. The AMSI Summer School is also an exciting opportunity for mathematical sciences students from around Australia to come together over the summer break to develop their skills and networks.
Students can choose from a selection of available modules. Modules that substantially cover material already taught in existing University of Melbourne postgraduate Mathematics and Statistics subjects are not available. The School of Mathematics and Statistics determines the subset of allowed modules that University of Melbourne students can choose from.
This subject is only available to students enrolled in the Master of Science (Mathematics and Statistics) or the Graduate Diploma in Science (Advanced) in the Mathematics and Statistics stream.
Available Modules in 2018 (more information is available on the AMSI Summer School website: http://ss.amsi.org.au/courses-lecturers/)
(i) Iterative methods for sparse matrices
Associate Professor Timothy Moroney, Queensland University of Technology
Sparse matrices arise in a great many applications across science, engineering, statistics, business, and beyond. Depending on the application, these matrices may be very large, with many millions of rows and columns. Exploiting the sparsity of these matrices is essential for overcoming the storage and floating-point costs that would otherwise render even problems with dimensions in the thousands utterly impractical to solve.
Typical operations that are required on large, sparse matrices in practice are:
- Solving linear systems
- Computing eigenvalues and eigenvectors
- Computing functions of a matrix (e.g. the matrix exponential)
A remarkably versatile family of numerical methods called Krylov subspace methods can be applied to all three of these operations, and in doing so require only the action of the matrix in the form of matrix-vector products. By placing such minimal requirements on the means by which a matrix is utilised, these methods facilitate wide-ranging possibilities for how matrices can be stored (or not!) and accessed across modern computational architectures, paving the way for many of today’s high performance codes.
This course will cover Krylov subspace methods for all three problems: linear systems, eigenvalue problems and matrix functions, from their derivation through to efficient numerical implementations.
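The defining feature of Krylov subspace methods, that they touch the matrix only through matrix-vector products, can be previewed with a minimal conjugate gradient solver. This is an illustrative sketch only (the tridiagonal test matrix, tolerance and problem size are arbitrary choices, not course material):

```python
# A minimal sketch of a Krylov subspace method: conjugate gradients (CG)
# applied to a symmetric positive definite system A x = b. The matrix is
# never stored densely; only its action (a matvec) is required, which is
# what lets these methods scale to very large sparse problems.

def matvec_tridiag(x):
    """Action of the n x n matrix A = tridiag(-1, 2, -1) on a vector x."""
    n = len(x)
    y = [0.0] * n
    for i in range(n):
        y[i] = 2.0 * x[i]
        if i > 0:
            y[i] -= x[i - 1]
        if i < n - 1:
            y[i] -= x[i + 1]
    return y

def cg(matvec, b, tol=1e-10, maxiter=1000):
    """CG builds the Krylov subspace span{b, Ab, A^2 b, ...} implicitly."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual r = b - A x  (x = 0 initially)
    p = list(r)                      # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:       # converged: ||r|| below tolerance
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

b = [1.0] * 50
x = cg(matvec_tridiag, b)
residual = max(abs(bi - axi) for bi, axi in zip(b, matvec_tridiag(x)))
```

Because `cg` only ever calls `matvec`, the same code works unchanged whether the matrix is stored in a sparse format, generated on the fly, or distributed across machines.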
(ii) Low-dimensional topology
Dr Daniel Mathews & Associate Professor Jessica Purcell, Monash University
The study of low-dimensional topology is the study of spaces of dimensions 2, 3, and 4, including the study of surfaces and their symmetries, knots and links, and structures on 3- and 4-manifolds. It is an area of active research with deep connections to mathematical fields such as geometry and dynamics; it also has modern applications to microbiology, chemistry, and quantum physics. It requires a different set of tools from higher-dimensional topology. Perhaps this was first realised when Smale proved the Poincaré conjecture in dimensions five and higher in 1961. The proofs in dimension four (by Freedman in 1982) and dimension three (by Perelman in 2003) required completely new ideas.
In this course we will cover some foundational results of low-dimensional topology. In two dimensions, we will study surfaces, their symmetries, and the mapping class group, proving a beautiful theorem of Lickorish that the mapping class group is generated by Dehn twists (which we will define). In three dimensions, we will study knots – knotted loops in three-dimensional space – and 3-manifolds. We will investigate different ways of describing 3-manifolds, including Heegaard splittings and Dehn fillings, and knot invariants including the Jones and Alexander polynomials. Along the way we will mention some 4-dimensional applications.
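As a small taste of the knot invariants mentioned above: the Alexander polynomial can be computed from a Seifert matrix V as Delta(t) = det(V - t V^T). The sketch below (helper names are ours, not course material) evaluates this for a standard 2x2 Seifert matrix of the trefoil, giving the well-known answer t^2 - t + 1:

```python
# Alexander polynomial from a Seifert matrix: Delta(t) = det(V - t*V^T).
# Polynomials in t are represented as coefficient lists [c0, c1, c2, ...].

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_sub(a, b):
    """Subtract polynomial b from polynomial a."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x - y for x, y in zip(a, b)]

def alexander_2x2(V):
    """Delta(t) = det(V - t V^T) for a 2x2 Seifert matrix V."""
    # Entry (i, j) of V - t V^T is the degree-1 polynomial V[i][j] - t*V[j][i].
    M = [[[V[i][j], -V[j][i]] for j in range(2)] for i in range(2)]
    return poly_sub(poly_mul(M[0][0], M[1][1]), poly_mul(M[0][1], M[1][0]))

# A standard Seifert matrix for the trefoil knot (genus-1 Seifert surface).
trefoil_V = [[-1, 1], [0, -1]]
delta = alexander_2x2(trefoil_V)   # coefficients of 1 - t + t^2
```

The resulting coefficient list `[1, -1, 1]` encodes 1 - t + t^2, the Alexander polynomial of the trefoil (up to units), which certifies that the trefoil is genuinely knotted.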
(iii) Mathematical relativity and Lorentzian geometry
Dr Andy Hammerlindl & Associate Professor Todd Oliynyk, Monash University
General Relativity is currently our most accurate theory of gravity. It applies across a huge range of physical scales, from the motion of small bodies such as satellites orbiting Earth to the dynamics of supermassive black holes and even our Universe. General Relativity is formulated in the language of Differential Geometry. In this course, we will introduce the background in Differential Geometry needed to understand the fundamental concepts and field equations of General Relativity. Applications of the theory to static black holes, the perihelion precession of Mercury's orbit and gravitational waves will be discussed as time permits.
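The perihelion-precession application can be previewed with a short calculation: the leading-order GR correction is Delta(phi) = 6*pi*G*M / (c^2 * a * (1 - e^2)) radians per orbit. Evaluated with approximate published values for Mercury (the constants below are our assumptions, rounded), this reproduces the famous ~43 arcseconds per century:

```python
import math

# Leading-order GR perihelion advance per orbit:
#   delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2))
GM_SUN = 1.32712e20          # standard gravitational parameter of the Sun, m^3/s^2
C = 2.99792458e8             # speed of light, m/s
a = 5.7909e10                # Mercury's semi-major axis, m
e = 0.20563                  # Mercury's orbital eccentricity
ORBITS_PER_CENTURY = 36525.0 / 87.969   # Julian century / Mercury's orbital period

dphi = 6 * math.pi * GM_SUN / (C**2 * a * (1 - e**2))        # radians per orbit
arcsec_per_century = math.degrees(dphi) * 3600 * ORBITS_PER_CENTURY
```

The computed value is close to 43 arcseconds per century, the anomalous precession that Newtonian gravity could not account for.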
(iv) Probabilistic methods and random graphs
Professor Nick Wormald & Dr Jane Gao, Monash University
The probabilistic method proves the existence of a mathematical structure by showing that a random element of an appropriate probability space has the desired properties with positive probability. This might sound simple enough, but books have been written on techniques inspired by the idea. A graph is a combinatorial structure consisting of nodes (vertices) with lines or edges joining them.
The course will introduce some basic techniques of the probabilistic method and give applications in graph theory and combinatorics. In particular, the probabilistic method gives easy proofs of the existence of some graphs (and other objects) which are very hard to construct explicitly, or not known to exist by other methods.
The course will also study random graphs – graphs selected at random from some given probability space. Some appealing properties of these graphs can be proved. They can also be woven into uses of the probabilistic method. Additionally, the methods studied here can give rise to some surprisingly simple and efficient computer algorithms.
Topics to be covered include the following:
- Linearity of expectation and alterations;
- The second moment method;
- Random graph models and properties;
- The local lemma;
- Martingales and concentration inequalities;
- Branching processes and the size of the giant component;
- Method of moments and random regular graphs;
- Randomised algorithms and derandomization.
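As a taste of the first-moment style of argument, consider Erdős's classical lower bound for Ramsey numbers: if C(n,k) * 2^(1 - C(k,2)) < 1, then a uniformly random 2-colouring of the edges of K_n contains no monochromatic K_k with positive probability, so R(k,k) > n. A short sketch (the function name is ours, for illustration only):

```python
import math

def ramsey_lower_bound(k):
    """Largest n with C(n, k) * 2**(1 - C(k, 2)) < 1, which proves R(k,k) > n.

    The union bound: the expected number of monochromatic copies of K_k in a
    random 2-colouring of K_n is C(n,k) * 2^(1 - C(k,2)). If this is below 1,
    some colouring has no monochromatic K_k at all.
    """
    n = k
    while math.comb(n + 1, k) * 2.0 ** (1 - math.comb(k, 2)) < 1:
        n += 1
    return n

bound4 = ramsey_lower_bound(4)   # proves R(4,4) > 6
bound6 = ramsey_lower_bound(6)   # proves R(6,6) > 17
```

No explicit colouring is exhibited anywhere, which is exactly the point: the method proves existence without construction, and explicit colourings matching these bounds are notoriously hard to find.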
(v) Probability, complex analysis and lattice models
Dr Laurie Field, Australian National University & Dr Gregory Markowsky, Monash University
One of the most exciting recent developments in probability theory and mathematical physics has been the discovery of so-called “conformal invariance” in two-dimensional models. Physicists have long conjectured, extrapolating from symmetries under rotation and dilation, that many of their discrete models in the plane would have conformally invariant scaling limits — that is, processes which have the same law after the underlying geometry is mapped in a continuous, angle-preserving way. Recent work by mathematicians has shown this to be correct, as well as identifying the natural limiting process of interfaces in many of these models as the Schramm–Loewner Evolution (SLE). This theory has led to the rigorous determination of critical exponents such as the Brownian intersection exponents and the Hausdorff dimensions of many random planar fractals.
In the first half of this course, we study the most important discrete models that exhibit conformal invariance in the scaling limit, including simple random walk, loop-erased walk, percolation and the Ising model, and discuss what forms of discrete complex analysis can be used to illuminate these models.
In the second half of the course, we pass to the continuum, where Brownian motion becomes a key tool. Lévy’s theorem on conformal invariance of planar Brownian motion yields quick and informative proofs of many facts from complex analysis. Loewner’s theorem in complex analysis describes a non-crossing curve by the differential equation satisfied by the uniformizing conformal map from its complement. By applying Loewner’s differential equation to a Brownian motion on the boundary, we obtain the definition of SLE, derive its first important properties, and heuristically explain why it is the scaling limit of interfaces in the models studied in the first half.
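One of the discrete models from the first half, the loop-erased random walk, is easy to simulate: run a simple random walk and erase loops in chronological order. A minimal sketch (the stopping disk radius and random seed are arbitrary illustrative choices):

```python
import random

def loop_erase(path):
    """Chronological loop erasure: scan the walk, and whenever it revisits a
    site already on the erased path, delete the intervening loop."""
    erased = []
    index = {}                      # site -> position in the erased path
    for v in path:
        if v in index:
            cut = index[v]          # revisit: cut back to the first visit of v
            for u in erased[cut + 1:]:
                del index[u]
            erased = erased[:cut + 1]
        else:
            index[v] = len(erased)
            erased.append(v)
    return erased

def lerw(radius, rng):
    """Loop-erased random walk from the origin, stopped on leaving a disk."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x, y = 0, 0
    path = [(0, 0)]
    while x * x + y * y < radius * radius:
        dx, dy = rng.choice(steps)
        x, y = x + dx, y + dy
        path.append((x, y))
    return loop_erase(path)

rng = random.Random(0)
curve = lerw(30, rng)   # a self-avoiding lattice path from the origin
```

The erased path is self-avoiding by construction; the deep theorem of the course is that, as the lattice spacing shrinks, such curves converge to SLE with parameter 2.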
(vi) Statistical machine learning
Note: non-allowed subject combination - COMP90051 Statistical Machine Learning
Dr Lamiae Azizi, University of Sydney
Statistical machine learning merges statistics with the computational sciences—computer science, systems science and optimization. Much of the work in statistical machine learning is driven by applied problems in science and technology, where data streams are increasingly large-scale, dynamic and heterogeneous, and where mathematical and algorithmic creativity is required to bring statistical methodology to bear.
In this course we will study how to use probability models to analyze data, focusing on the mathematical details of the models and the algorithms for computing them. We will study both foundations and advanced methods. The goal of the course is to understand modern probabilistic modelling, and to develop good practices for specifying and applying probabilistic models to analyze real-world data. The applications range from customer purchases in online stores and the modelling of price changes in financial markets to the analysis of the connectivity of genes in biological systems, the discovery of new materials with optimal properties, and the design of more efficient hardware.
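A small example of this model-plus-algorithm viewpoint: the EM algorithm fitted to a two-component Gaussian mixture. Everything below is an illustrative sketch under simplifying assumptions (unit variances, synthetic 1-D data), not the course's software:

```python
import math
import random

# Synthetic data: two well-separated unit-variance Gaussian clusters.
rng = random.Random(42)
data = ([rng.gauss(0.0, 1.0) for _ in range(200)]
        + [rng.gauss(8.0, 1.0) for _ in range(200)])

def normal_pdf(x, mu):
    """Density of N(mu, 1) at x."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def em_two_gaussians(xs, iters=100):
    """EM for a two-component, unit-variance Gaussian mixture: alternate
    posterior responsibilities (E-step) with weighted re-estimation of the
    means and the mixing weight (M-step)."""
    mu1, mu2, w1 = min(xs), max(xs), 0.5
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        r = []
        for x in xs:
            p1 = w1 * normal_pdf(x, mu1)
            p2 = (1.0 - w1) * normal_pdf(x, mu2)
            r.append(p1 / (p1 + p2))
        # M-step: maximise the expected complete-data log-likelihood
        s = sum(r)
        w1 = s / len(xs)
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / s
        mu2 = sum((1.0 - ri) * x for ri, x in zip(r, xs)) / (len(xs) - s)
    return mu1, mu2, w1

mu1, mu2, w1 = em_two_gaussians(data)
```

The same E-step/M-step template scales up to the latent-variable models used in the applications above; what changes is the model, not the shape of the algorithm.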
(vii) Topological data analysis
Dr Vanessa Robins & Dr Katharine Turner, Australian National University
Topological Data Analysis (TDA) is an interdisciplinary field combining methods from algebraic topology, statistics and computational algorithms. It quantifies the shape of data over a full range of length scales and, most importantly, captures how that shape changes as the length scale parameter is varied. Applications are diverse with examples including the quantification of bone morphology and porous materials, the connectivity structure of the brain, and time series analysis.
This course will cover the relevant background from algebraic topology, provide a detailed overview of persistent homology – the main tool in TDA – and present various approaches to summarising the information provided by persistent homology. Since any data analysis must consider the effects of randomness and noise, we will also study statistical aspects of TDA, including stability, correlation, and statistical significance tests. The tutorials and assessment will include working with established software packages to analyse example data.
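The simplest instance of persistent homology, degree-0 persistence of a Vietoris–Rips filtration, reduces to single-linkage clustering and can be computed with a union-find structure. A sketch (the point cloud and implementation details are illustrative, not taken from the course):

```python
import math
from itertools import combinations

def persistence_h0(points):
    """0-dimensional persistence of the Vietoris-Rips filtration of a point
    cloud: every point (component) is born at scale 0, and a component dies
    at the length of the edge that merges it into another component. This is
    Kruskal's minimum spanning tree algorithm in disguise."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    # Edges of the filtration, sorted by the scale at which they appear.
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)      # one component dies at scale d
    # Persistence pairs (birth=0, death); one component never dies.
    return [(0.0, d) for d in deaths]

# Two clusters of three points each: expect four short bars (within-cluster
# merges) and one long bar (the merge between the clusters).
cloud = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
bars = persistence_h0(cloud)
```

The long bar's death scale, the distance between the clusters, is exactly the kind of multi-scale shape information that makes persistence useful on noisy data.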
Intended learning outcomes
After completing this subject students should:
- have gained an understanding at postgraduate level of the mathematics and statistics of the selected topic;
- be familiar with the basic mathematical techniques used in the area of the selected topic;
- appreciate the role of the newly learned mathematical results in the broader context of mathematical sciences and their potential applications to solving real world problems;
- have the ability to pursue further studies in the area of the selected topic and related areas.
In addition to learning specific skills that will assist them in their future careers in science, students will have the opportunity to develop generic skills that will assist them in any future career path. These include:
- problem-solving skills: the ability to engage with unfamiliar problems and identify relevant solution strategies;
- analytical skills: the ability to construct and express logical arguments and to work in abstract or general terms to increase the clarity and efficiency of analysis;
- collaborative skills: the ability to work in a team;
- time-management skills: the ability to meet regular deadlines while balancing competing commitments.