Tutorials

Tutorials will be offered via Zoom at the times indicated below. Attendees must be available at the indicated time; tutorials will not be available for on-demand viewing afterwards.

Tutorial Schedule
Full-Day Tutorials
Part I
Saturday, 26 September, 05:00 - 09:00 PDT (Los Angeles, Pacific Time)
Saturday, 26 September, 14:00 - 18:00 CEST (Central Europe Summer Time)
Saturday, 26 September, 20:00 - 00:00 CST (China Standard Time)
Part II
Sunday, 27 September, 05:00 - 09:00 PDT (Los Angeles, Pacific Time)
Sunday, 27 September, 14:00 - 18:00 CEST (Central Europe Summer Time)
Sunday, 27 September, 20:00 - 00:00 CST (China Standard Time)
Half-Day Tutorials
Sunday, 27 September, 05:00 - 09:00 PDT (Los Angeles, Pacific Time)
Sunday, 27 September, 14:00 - 18:00 CEST (Central Europe Summer Time)
Sunday, 27 September, 20:00 - 00:00 CST (China Standard Time)

FD-1: Earth Observation Big Data Intelligence: Theory and Practice of Deep Learning and Big Data Mining

Presented by Mihai Datcu, Feng Xu, Akira Hirose

When
Saturday, 26 September, 05:00 - 09:00 and Sunday, 27 September, 05:00 - 09:00 PDT (Los Angeles, US Pacific Time)
Saturday, 26 September, 14:00 - 18:00 and Sunday, 27 September, 14:00 - 18:00 CEST (Central Europe Summer Time)
Saturday, 26 September, 20:00 - 00:00 and Sunday, 27 September, 20:00 - 00:00 CST (China Standard Time)
In the big data era of Earth observation, deep learning and other data mining technologies have become critical to successful end applications. Over the past several years, interest in deep learning techniques applied to remote sensing has grown exponentially, covering not only hyperspectral imagery but also synthetic aperture radar (SAR) imagery. This tutorial has three parts. The first part introduces the basic principles of machine learning and the evolution towards deep learning paradigms. It presents methods of stochastic variational and Bayesian inference, focusing on the methods and algorithms of deep-learning generative adversarial networks. Since data sets are an organic part of the learning process, EO dataset biases pose new challenges. The tutorial addresses open questions on relative data bias and cross-dataset generalization for very specific EO cases, such as multispectral and SAR observations with a large variability of imaging parameters and semantic content. The second part introduces the theory of deep neural networks and the practice of deep learning-based remote sensing applications. It covers the major types of deep neural networks, backpropagation algorithms, programming toolboxes, and several examples of deep learning-based remote sensing imagery processing. The last part focuses on the treatment of, and applications to, phase and polarization in SAR data. Since SAR is a coherent observation, its data properties are quite special and enable specific feature extraction and discovery. This part deals with deep learning in the complex-amplitude and polarization domains, as well as so-called data structurization of such multimodal processing.
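The complex-amplitude domain mentioned above can be illustrated with a minimal sketch (all data synthetic, not from the tutorial materials): a single-look complex SAR patch is split into amplitude and phase, one common way to feed coherent data to a real-valued network.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic single-look complex (SLC) SAR patch: complex-valued pixels.
slc = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

amplitude = np.abs(slc)    # backscatter magnitude
phase = np.angle(slc)      # coherent phase, in (-pi, pi]

# Stacking amplitude and phase as two real channels is one common way to
# feed complex data into a real-valued network; complex-valued networks
# would instead consume `slc` directly.
channels = np.stack([amplitude, phase])
```

The two representations are equivalent: amplitude and phase together recover the original complex signal.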

FD-2: Machine Learning in Remote Sensing - Theory and Applications for Earth Observation

Presented by Ronny Hänsch, Yuliya Tarabalka, Naoto Yokoya, Andreas Ley

When
Saturday, 26 September, 05:00 - 09:00 and Sunday, 27 September, 05:00 - 09:00 PDT (Los Angeles, US Pacific Time)
Saturday, 26 September, 14:00 - 18:00 and Sunday, 27 September, 14:00 - 18:00 CEST (Central Europe Summer Time)
Saturday, 26 September, 20:00 - 00:00 and Sunday, 27 September, 20:00 - 00:00 CST (China Standard Time)
Despite the wide and often successful application of machine learning techniques to analyse and interpret remotely sensed data, the complexity, special requirements, and selective applicability of these methods often hinder their use to full potential. The gap between sensor- and application-specific expertise on the one hand, and a deep insight into and understanding of existing machine learning methods on the other, often leads to suboptimal results, unnecessary or even harmful optimizations, and biased evaluations. The aim of this tutorial is threefold: first, to provide insight into and a deep understanding of the algorithmic principles behind state-of-the-art machine learning approaches, including Random Forests and Convolutional Networks, feature learning, and incremental learning for large-scale/big data remote sensing classification; second, to illustrate the benefits and limitations of machine learning with practical examples, including recommendations on proper preprocessing and initialization (e.g. data normalization), available sources of data and benchmarks, and how to properly generate and sample training data; third, to inspire new ideas by discussing unusual applications from remote sensing and other domains.
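The preprocessing point above can be made concrete with a small sketch (synthetic data, not the tutorial's material): z-score normalization of a feature matrix whose bands have very different scales, with statistics fitted on the training split only to avoid information leakage.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical feature matrix: 100 samples x 5 bands with very different
# scales (e.g. raw digital numbers next to unitless indices).
X = rng.normal(loc=[0, 10, 100, 1000, 5],
               scale=[1, 2, 50, 300, 0.1],
               size=(100, 5))

# Fit normalization statistics on the TRAINING data only; the same mu and
# sigma must later be applied to validation/test data.
mu = X.mean(axis=0)
sigma = X.std(axis=0)
X_norm = (X - mu) / sigma
```

After this step every band has zero mean and unit variance, so no single band dominates distance-based or gradient-based learners.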

FD-3: Mathematical Morphology in Interpolations and Extrapolations

Presented by B. S. Daya Sagar

When
Saturday, 26 September, 05:00 - 09:00 and Sunday, 27 September, 05:00 - 09:00 PDT (Los Angeles, US Pacific Time)
Saturday, 26 September, 14:00 - 18:00 and Sunday, 27 September, 14:00 - 18:00 CEST (Central Europe Summer Time)
Saturday, 26 September, 20:00 - 00:00 and Sunday, 27 September, 20:00 - 00:00 CST (China Standard Time)
Data available at multiple spatial/spectral/temporal scales pose numerous challenges to data scientists. Researchers have recently paid wide attention to handling such data, acquired through various sensing mechanisms, to address intertwined topics such as pattern retrieval, pattern analysis, quantitative reasoning, and simulation and modelling, for a better understanding of the spatiotemporal behaviour of several terrestrial phenomena and processes. Various original algorithms and techniques, mainly based on mathematical morphology (Matheron 1975; Serra 1982; Soille 2010; Sagar 2010, 2013, 2018), have been developed and demonstrated. This course presents the fundamentals of mathematical morphology and their role in interpolations and extrapolations, with applications in geosciences and geoinformatics. It will be useful for those with research interests in image processing and analysis, remote sensing and geosciences, geographical information science, spatial statistics and mathematical morphology, mapping of Earth-like planetary surfaces, etc. The course is offered in two parts: the morning session covers all the fundamental morphological transformations; the afternoon session applies those transformations to morphological interpolations and extrapolations, illustrated with several case studies.
Morning Session: Introduction to Mathematical Morphology
(i) Binary Mathematical Morphology
(ii) Grayscale Mathematical Morphology
(iii) Geodesic and Graph Morphology

Afternoon Session: Mathematical Morphology in Spatial Interpolations and Extrapolations
(i) Conversion of point data into a polygonal map via SKIZ and WSKIZ
(ii) Visualisation of spatiotemporal behaviour of discrete maps via generation of recursive median elements
(iii) Morphing of grayscale DEMs via morphological interpolations
(iv) Ranks for pairs of spatial fields via a metric based on grayscale morphological distances

Bibliography
1. Georges Matheron, 1975, Random Sets and Integral Geometry (New York: John Wiley & Sons).
2. Jean Serra, 1982, Image Analysis and Mathematical Morphology (London: Academic Press), p. 610.
3. B. S. Daya Sagar and Jean Serra, 2010, Preface: Spatial Information Retrieval, Analysis, Reasoning and Modelling, International Journal of Remote Sensing, v. 31, no. 22, p. 5747-5750.
4. Pierre Soille, 2010, Morphological Image Analysis: Principles and Applications (Springer), p. 408.
5. B. S. Daya Sagar, 2013, Mathematical Morphology in Geomorphology and GISci (Boca Raton: CRC Press), p. 546.
6. B. S. Daya Sagar, 2018, Mathematical Morphology in Geosciences and GISci: An Illustrative Review. In: Daya Sagar B., Cheng Q., Agterberg F. (eds) Handbook of Mathematical Geosciences. Springer, Cham. DOI: https://doi.org/10.1007/978-3-319-78999-6_35.
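The fundamental binary transformations covered in the morning session can be sketched in a few lines (a toy numpy implementation, not the course's software): dilation as a union of shifted copies, and erosion obtained from dilation by duality.

```python
import numpy as np

def dilate(img, se):
    """Binary dilation of 0/1 image `img` by structuring element `se`:
    the union (logical OR) of copies of `img` shifted by each SE offset."""
    pi, pj = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(img, ((pi, pi), (pj, pj)))
    out = np.zeros_like(img)
    for i in range(se.shape[0]):
        for j in range(se.shape[1]):
            if se[i, j]:
                out |= padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def erode(img, se):
    """Binary erosion via duality: complement, dilate by the reflected SE,
    complement again (pixels outside the frame are treated as foreground)."""
    return 1 - dilate(1 - img, se[::-1, ::-1])

img = np.zeros((7, 7), dtype=int)
img[2:5, 2:5] = 1                 # a 3x3 foreground square
se = np.ones((3, 3), dtype=int)   # 3x3 structuring element

grown = dilate(img, se)           # expands the square to 5x5
shrunk = erode(img, se)           # shrinks it to the single centre pixel
```

Opening and closing, and the geodesic operators of the afternoon session, are built by composing exactly these two primitives.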

FD-4: Natural disasters and hazards monitoring using Earth Observation data

Presented by Ramona Pelich, Marco Chini, Wataru Takeuchi, Young-Joo Kwak and Vitaliy Yurchenko

When
Saturday, 26 September, 05:00 - 09:00 and Sunday, 27 September, 05:00 - 09:00 PDT (Los Angeles, US Pacific Time)
Saturday, 26 September, 14:00 - 18:00 and Sunday, 27 September, 14:00 - 18:00 CEST (Central Europe Summer Time)
Saturday, 26 September, 20:00 - 00:00 and Sunday, 27 September, 20:00 - 00:00 CST (China Standard Time)
In recent years, natural disasters, i.e., hydro-geo-meteorological hazards and risks, have been frequently experienced by many countries across the globe. 2019 was another year in which numerous devastating disasters hit several regions. For example, in the Bahamas, Hurricane Dorian caused massive flooding with significant damage, while Japan was affected by cascading and interacting hazards such as catastrophic mudslides and devastating floods caused by Typhoon Hagibis. Also in 2019, north-east India suffered badly from monsoon-related flooding and landslides as the Ganga and Bagmati rivers swelled due to heavy rainfall. This tutorial comprises the basic theoretical and experimental information essential for an emergency hazard and risk mapping process based on advanced satellite Earth Observation (EO) data, including both SAR and optical data. First, the tutorial gives a better understanding of disaster risk in the early stage by means of EO data available immediately after a disaster occurs. Then, after several comprehensive lectures focused on floods and landslides, a hands-on session will give all participants the opportunity to learn more about the practical EO tools available for rapid-response information. This full-day tutorial will demonstrate the implementation of disaster risk reduction and sustainable monitoring for effective emergency response and management, bridging decision and action activities.

FD-5: Open Source Imaging Spectroscopy: Visualization, Analysis, and Atmospheric Correction

Presented by David Ray Thompson

When
Saturday, 26 September, 05:00 - 09:00 and Sunday, 27 September, 05:00 - 09:00 PDT (Los Angeles, US Pacific Time)
Saturday, 26 September, 14:00 - 18:00 and Sunday, 27 September, 14:00 - 18:00 CEST (Central Europe Summer Time)
Saturday, 26 September, 20:00 - 00:00 and Sunday, 27 September, 20:00 - 00:00 CST (China Standard Time)
Imaging spectroscopy, also known as hyperspectral imaging, is revolutionizing remote sensing. Spectroscopy enables quantitative mapping of materials and chemistry across wide areas, and future orbital missions by NASA and other agencies will provide these data at global scales. This tutorial is a sequence of hands-on lab sessions using open-source code for imaging spectrometer data analysis. The full day is divided into a morning session for beginners and an afternoon session dealing with cutting-edge topics for more advanced researchers. The morning session will introduce the basic concepts behind these instruments and provide practical experience in visualization and analysis. The tutorials will use the open-source ISOFIT codebase (https://github.com/isofit/isofit) for atmospheric correction, and OpenSPEC for visualization capability similar to that provided by the ENVI interface. The afternoon session will focus on Bayesian methods, including atmosphere/surface property estimation with rigorous uncertainty propagation. Topics include: Optimal Estimation (OE) atmospheric correction methods, principled design of model priors and constraints, and formal error analysis. Both sessions are open to all attendees, who can attend any combination in any order. Tutorial materials are also available as open-source resources for participants to use in their own courses.
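The Optimal Estimation idea behind the afternoon session can be sketched on a linear toy problem (this is the standard Rodgers-style update with made-up matrices, not the ISOFIT implementation): a Gaussian prior on the state is combined with a noisy measurement through a forward-model Jacobian, yielding a posterior state and a shrunken posterior covariance.

```python
import numpy as np

# Linear OE sketch: y = K x + noise, prior x ~ N(xa, Sa), noise ~ N(0, Se).
# All matrices and values below are hypothetical.
K = np.array([[1.0, 0.5],
              [0.2, 1.0]])          # Jacobian of the forward model
xa = np.array([0.3, 0.1])          # prior mean state
Sa = np.diag([0.04, 0.04])         # prior covariance
Se = np.diag([0.01, 0.01])         # measurement-noise covariance
y = np.array([0.55, 0.25])         # observed radiances (hypothetical)

# Gain matrix and posterior (MAP) state estimate.
G = Sa @ K.T @ np.linalg.inv(K @ Sa @ K.T + Se)
x_hat = xa + G @ (y - K @ xa)

# Posterior covariance: the measurement reduces uncertainty vs. the prior.
S_hat = (np.eye(2) - G @ K) @ Sa
```

The same update can be written in "information" form, (K'Se⁻¹K + Sa⁻¹)⁻¹(K'Se⁻¹y + Sa⁻¹xa), which is how the formal error analysis mentioned above is usually derived.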

FD-6: Scalable Machine Learning with High Performance and Cloud Computing

Presented by Gabriele Cavallaro, Shahbaz Memon and Rocco Sedona

When
Saturday, 26 September, 05:00 - 09:00 and Sunday, 27 September, 05:00 - 09:00 PDT (Los Angeles, US Pacific Time)
Saturday, 26 September, 14:00 - 18:00 and Sunday, 27 September, 14:00 - 18:00 CEST (Central Europe Summer Time)
Saturday, 26 September, 20:00 - 00:00 and Sunday, 27 September, 20:00 - 00:00 CST (China Standard Time)
Modern Earth Observation (EO) programs have an open data policy and provide massive volumes of free multi-sensor data every day. NASA's Landsat (the longest-running EO program) and ESA's Copernicus provide data with high spectral-spatial coverage and frequent revisits, enabling global monitoring of the Earth in near real time. Copernicus, with its fleet of Sentinel satellites, is now the world's largest single EO program. These programs show that the vast amount of raw data available calls for a re-definition of the challenges within the entire Remote Sensing (RS) life cycle (i.e., the data acquisition, processing, and application phases). It is no coincidence that RS data are now described in big data terms: volume (increasing scale of acquired/archived data), velocity (rapidly growing data generation rate and real-time processing needs), variety (data acquired from multiple satellite sensors with different spectral, spatial, temporal, and radiometric resolutions), veracity (data uncertainty/accuracy), and value (extracted information). The large-scale, high-frequency monitoring of the Earth requires robust and scalable Machine Learning (ML) and Deep Learning (DL) models trained on annotated (i.e., not raw) time series of multisensor images at the global level (e.g., acquired by Landsat 8 and Sentinel-2). DL has already brought crucial achievements in solving RS image classification problems. State-of-the-art results have been achieved by deep networks with backbones based on convolutional transformations (e.g., Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs)). Their hierarchical architecture, composed of stacked repetitive operations, enables the extraction of useful image features from raw pixel data and the modelling of the high-level semantic content of RS images.
On the one hand, DL can lead to more accurate classification of land cover classes when networks are trained on large annotated RS datasets. On the other hand, deep networks pose challenges in terms of training time: using a large dataset to train a DL model requires non-negligible time resources. In this scenario, approaches relying on local workstations (i.e., using MATLAB, R, SAS, SNAP, ENVI, etc.) provide only limited capabilities. Although modern commodity computers and laptops are becoming more powerful in terms of multi-core configurations and GPUs, limitations in computational power and memory remain an issue when it comes to fast training of large, high-accuracy models from correspondingly large amounts of data. Therefore, highly scalable, parallel, distributed architectures (such as clusters or clouds) are a necessary solution for training DL classifiers in a reasonable amount of time, which in turn provides users with high accuracy in recognition tasks. The tutorial aims to provide a complete overview for an audience unfamiliar with these topics. It follows a two-fold approach: selected background lectures (morning session) followed by practical hands-on exercises (afternoon session), so that participants can carry out their own research after the tutorial. The tutorial will discuss the fundamentals of what a supercomputer and a cloud consist of, and how such systems can be used to solve remote sensing problems that require fast and highly scalable solutions, such as realistic real-time scenarios.
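The core mechanism of scalable training, synchronous data parallelism, can be sketched without any cluster at all (a single-process numpy simulation with made-up data; real systems distribute the shards with MPI, Horovod or similar): each "worker" computes the gradient on its own data shard, the gradients are averaged (an all-reduce), and all workers apply the same update.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy regression problem standing in for a training set.
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(512, 2))
y = X @ w_true

def grad(w, Xs, ys):
    """Least-squares gradient on one worker's shard."""
    return 2 * Xs.T @ (Xs @ w - ys) / len(ys)

w = np.zeros(2)
shards = np.array_split(np.arange(512), 4)   # 4 hypothetical workers
for _ in range(200):
    # "All-reduce": average the per-worker gradients, then one shared step.
    g = np.mean([grad(w, X[s], y[s]) for s in shards], axis=0)
    w -= 0.1 * g
```

Because the shards are equal-sized, the averaged gradient equals the full-batch gradient, so the distributed run converges to the same solution as a single-machine run, only faster in wall-clock time when workers run concurrently.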

FD-7: TOPS Sentinel-1 SAR Interferometry for ground motion detection and monitoring

Presented by Dinh Ho Tong Minh

When
Saturday, 26 September, 05:00 - 09:00 and Sunday, 27 September, 05:00 - 09:00 PDT (Los Angeles, US Pacific Time)
Saturday, 26 September, 14:00 - 18:00 and Sunday, 27 September, 14:00 - 18:00 CEST (Central Europe Summer Time)
Saturday, 26 September, 20:00 - 00:00 and Sunday, 27 September, 20:00 - 00:00 CST (China Standard Time)
This tutorial explains how to use SAR Interferometry (InSAR) techniques on real-world TOPS Sentinel-1 images, with user-oriented (no coding skills required!) open-source software. After a quick summary of SAR and InSAR theory, the tutorial shows how to apply Sentinel-1 SAR data and processing technology to identify and monitor ground deformation.
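The core InSAR operation the tutorial builds on can be sketched in a few lines (synthetic data, not Sentinel-1 processing): the interferogram is the conjugate product of two co-registered complex images, whose phase isolates the differential (deformation/topographic) signal.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two hypothetical co-registered single-look complex images of one scene;
# the second acquisition carries an extra deformation phase ramp.
master = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
ramp = np.linspace(0, np.pi / 2, 64).reshape(8, 8)
slave = master * np.exp(1j * ramp)

# Interferogram: the conjugate product cancels the common backscatter phase
# and leaves only the differential phase.
interferogram = master * np.conj(slave)
dphase = -np.angle(interferogram)   # recovers the ramp (no wrapping here)
```

In real data this phase is wrapped modulo 2π and mixed with topography, orbit and atmosphere terms, which is exactly what the InSAR processing chain presented in the tutorial removes.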

HD-1: 3D/4D Radar Tomography: concepts, practice and applications

Presented by Fabrizio Lombardini

When
Sunday, 27 September, 05:00 - 09:00 PDT (Los Angeles, US Pacific Time)
Sunday, 27 September, 14:00 - 18:00 CEST (Central Europe Summer Time)
Sunday, 27 September, 20:00 - 00:00 CST (China Standard Time)
Thanks to its capability of providing direct physical measurements, synthetic aperture radar (SAR) Interferometry, which allows the generation of digital elevation models and the monitoring of displacements to the order of mm/year, is one of the techniques that have most pushed the application of SAR to a wide range of scientific, institutional and commercial areas, and it has provided significant returns to society in terms of improved risk monitoring. SAR images of the same scene suitable for interferometric processing are today available for most of the Earth, and their number is growing exponentially. Archives associated with spaceborne SAR sensors are filled with data collected with time and observation-angle diversity (multipass-multibaseline data). Moreover, current system trends in the SAR field involve clusters of cooperative formation-flying satellites capable of multiple simultaneous acquisitions (tandem or multistatic SAR systems); airborne systems with single-pass multibaseline acquisition capability are also available, and unmanned aerial vehicles capable of differential monitoring of rapid phenomena are being experimented with. In parallel, processing techniques have been developed, evolutions of the powerful SAR Interferometry, aimed at fully exploiting the information lying in this huge amount of multipass-multibaseline data to produce new and/or more accurate measuring and information-extraction functionalities. The focus of this tutorial is on processing methods that, by coherently combining multiple SAR images at the complex (phase and amplitude) data level, unlike phase-only Interferometry, allow improved or extended imaging and differential monitoring capabilities in terms of accuracy and unambiguous interpretation of the measurements.
The tutorial, along the lines of previous editions but in a renewed format, will cover interrelated techniques that in recent years have shaped an emerging branch of SAR interferometric remote sensing, Tomographic SAR Imaging and Information Extraction. This branch is playing an important role in the development of the next generation of SAR products and will enhance the application spectrum of SAR systems in Earth observation, in particular for the analysis and monitoring of complex scenarios such as urban/critical infrastructure and forests, or more generally volumetric scenes, e.g. ice layers and snowpacks. After briefly recalling the basic concept of SAR Interferometry, multibaseline/multipass Tomographic SAR techniques will be framed, presented, and discussed with respect to specific applications. These techniques are: 1) Multibaseline 3D Tomography, which furnishes (a) the separation of layover scatterers in elevation, to locate different scatterers interfering in the same pixel in the complex surface geometries of man-made structures, which cause signal garbling in high-frequency SARs, and (b) full 3D imaging of volumetric scatterers, to profile the scattering distribution along the elevation direction for unambiguous extraction of physical and geometrical parameters in geophysical structures with vertical stratification, sensed by low-frequency SARs; and 2) Multipass 4D (3D+Time) and higher-order Differential Tomography of multiple layover scatterers with slow deformation motions, a more recent and very promising multidimensional imaging mode that bridges Differential Interferometry and Multibaseline Tomography. Basic concepts, signal models and the most widespread processing techniques for 3D/4D Tomographic SAR Imaging will be described in the array beamforming (i.e. spatial spectral estimation) framework, both Fourier-based and of the super-resolution kind (adaptive, and model-based).
Live demonstrations of these tomographic algorithms and their behaviour will be carried out using simple MATLAB simulation codes. A number of experimental results obtained with real data, multibaseline single-pass and multipass airborne as well as multipass spaceborne, in X-, C-, L-, and P-band (in particular AER-II, E-SAR, ERS-1/2, COSMO-SkyMed, TerraSAR-X), over infrastructure, urban, forest, and ice areas, will be presented to show current achievements in real cases and the important application potential of these techniques. Finally, recent trends in the area will be mentioned, including hints at compressive sensing Tomography, higher-order ("5D") Tomography robust to temporal decorrelation, and Differential Tomography of non-uniform deformation motions.
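In the same spirit as those simulation demos, the Fourier-based beamforming idea can be sketched in a few lines (a noise-free numpy toy with a hypothetical phase-to-elevation constant, not the tutorial's MATLAB code): the pixel values across baselines form a steering-vector response, and scanning matched steering vectors over an elevation grid yields the tomographic power spectrum, peaking at the scatterer's elevation.

```python
import numpy as np

# Toy multibaseline stack: one scatterer at elevation s_true, observed
# across N uniformly spaced baselines. kz (rad of phase per metre of
# elevation per metre of baseline) is hypothetical.
N = 16
baselines = np.arange(N) * 10.0            # metres
kz = 2 * np.pi / (N * 10.0 * 5.0)

s_true = 12.0                              # true scatterer elevation (m)
y = np.exp(1j * kz * baselines * s_true)   # noise-free pixel vector

# Fourier (matched-filter) beamforming over a grid of candidate elevations.
s_grid = np.linspace(-25, 25, 501)
steering = np.exp(1j * kz * np.outer(baselines, s_grid))
power = np.abs(steering.conj().T @ y) ** 2 / N**2

s_hat = s_grid[np.argmax(power)]           # peak locates the scatterer
```

With two superimposed scatterers the spectrum shows two peaks, which is the layover-separation functionality; super-resolution (adaptive, model-based) estimators replace the matched filter to sharpen those peaks.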

HD-2: Analysis-Ready Spatio-Temporal Big Data Cubes: Standards, Tools, Services

Presented by Peter Baumann

When
Sunday, 27 September, 05:00 - 09:00 PDT (Los Angeles, US Pacific Time)
Sunday, 27 September, 14:00 - 18:00 CEST (Central Europe Summer Time)
Sunday, 27 September, 20:00 - 00:00 CST (China Standard Time)
Datacubes are emerging as an enabling paradigm for offering massive spatio-temporal Earth data in an analysis-ready way: individual files are combined into single, homogenized objects, thereby easing access, extraction, analysis, and fusion. Essentially, datacubes unify spatio-temporal sensor, image, timeseries, simulation, and statistics data under a common modelling and servicing paradigm, independent of the variety of raster encodings used. In OGC and ISO standardization, coverages provide the unifying concept for spatio-temporal datacubes, with the streamlined service model of the Web Coverage Service (WCS), including the Web Coverage Processing Service (WCPS), OGC's geo datacube analytics language. A large and continuously growing number of open-source and proprietary tools support the coverage standards. In this tutorial we present the concept of datacubes, the relevant standards, and existing interoperability successes and issues. We inspect various implementations and discuss their individual benefits. Live demos based on the OGC reference implementation, rasdaman, accessing existing services and real-life examples, which participants can reproduce and modify on their Internet-connected laptops, will play a key role.
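The datacube operations that WCS/WCPS expose declaratively (trimming, slicing, condensing) can be pictured with a toy in-memory cube (synthetic numpy data, not a rasdaman call): the same logical requests a WCPS query would send to a server correspond to subsetting and aggregation on a (time, lat, lon) array.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy datacube with axes (time, lat, lon): 12 monthly composites
# of a 4x4 region. A real service hides the files behind one such object.
cube = rng.random((12, 4, 4))

timeseries = cube[:, 2, 1]     # "slice": fix a position, keep the time axis
summer = cube[5:8]             # "trim": subset the time axis to 3 months
mean_map = cube.mean(axis=0)   # "condense": aggregate the time axis away
```

The point of the coverage standards is that these three requests look identical to the client no matter how the underlying granules are encoded or partitioned.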

HD-3: Crop physiological assessments using high-resolution RGB images

Presented by Shawn C. Kefauver

When
Sunday, 27 September, 05:00 - 09:00 PDT (Los Angeles, US Pacific Time)
Sunday, 27 September, 14:00 - 18:00 CEST (Central Europe Summer Time)
Sunday, 27 September, 20:00 - 00:00 CST (China Standard Time)
In this tutorial we will review, in a short presentation, the state of the art in the use of commercially available consumer color digital cameras, which capture red, green and blue light covering the visible spectrum with broad spectral bands but at high spatial resolution and with accurate color calibration. We will review various RGB vegetation indexes that use this spectral concept for the estimation of biomass and canopy chlorophyll, such as the Normalized Green Red Difference Index and the Triangular Greenness Index, as well as others in popular use based on the same concept. We will also introduce a number of indexes based on alternate color-space transforms, such as Hue Saturation Intensity (HSI), CIE-Lab and CIE-Luv, and their practical calculation. Following this short presentation, we will look at the practical aspects of calculating these RGB vegetation indexes using the free software FIJI (FIJI Is Just ImageJ), both through its interactive GUI (graphical user interface) and in code form. Finally, we will cover several software plugin packages that include the calculation of several of these RGB vegetation indexes, whether images are captured using a standard digital camera and processed locally with the MaizeScanner (https://integrativecropecophysiology.com/software-development/maizescanner/) or CerealScanner (https://integrativecropecophysiology.com/software-development/cerealscanner/) FIJI plugins developed by the University of Barcelona, or captured by mobile phone and processed remotely by a server application.
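The two named indexes are simple per-pixel arithmetic on the three camera bands; a minimal sketch (synthetic image; the TGI coefficients follow the commonly cited Hunt et al. formulation with 670/550/480 nm band centres, so verify them against your own camera's bands):

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical RGB image with reflectance-like values in [0, 1].
img = rng.random((32, 32, 3))
R, G, B = img[..., 0], img[..., 1], img[..., 2]

# Normalized Green Red Difference Index: greenness vs. redness, in [-1, 1].
ngrdi = (G - R) / (G + R + 1e-12)   # small epsilon avoids division by zero

# Triangular Greenness Index: area of the triangle spanned by the
# red, green and blue responses (Hunt et al. style coefficients).
tgi = -0.5 * (190.0 * (R - G) - 120.0 * (R - B))
```

Healthy canopy pixels (high G relative to R) push both indexes up, which is what makes thresholding or regressing them against biomass and chlorophyll practical.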

HD-4: Predictive Modeling of Hyperspectral Responses of Natural Materials: Challenges and Applications

Presented by Gladimir V. G. Baranoski

When
Sunday, 27 September, 05:00 - 09:00 PDT (Los Angeles, US Pacific Time)
Sunday, 27 September, 14:00 - 18:00 CEST (Central Europe Summer Time)
Sunday, 27 September, 20:00 - 00:00 CST (China Standard Time)
Predictive computer models, in conjunction with in situ experiments, are regularly being used by remote sensing researchers to simulate and understand the hyperspectral responses of natural materials (e.g., plants and soils), notably with respect to varying environmental stimuli (e.g., changes in light exposure and water stress). The main purpose of this tutorial is to discuss theoretical and practical issues involved in the development of predictive models of light interactions with these materials, and point out key aspects that need to be addressed to enhance their efficacy. Furthermore, since similar models are used in other scientific domains, such as biophotonics, tissue optics, imaging science and computer graphics, just to name a few, this tutorial also aims to foster the cross-fertilization with related efforts in these fields by identifying common needs and complementary resources. The presentation of this tutorial will be organized into five main sections, which are described as follows. Section 1. This section provides the required background and terminology to be employed throughout the tutorial. It starts with an overview of the main processes involved in the interactions of light with matter. A concise review of relevant optics formulations and radiometry quantities is also provided. We also examine the key concepts of fidelity and predictability, and highlight the requirements and the benefits resulting from their incorporation in applied life sciences investigations. Section 2. It has been long recognized that a carefully designed model is of little use without reliable data. 
More specifically, the effective use of a model requires material characterization data (e.g., size and water content) to be used as input, supporting data (e.g., absorption spectra of material constituents) to be used during the light transport simulations, and measured radiometric data (e.g., hyperspectral reflectance, transmittance and BSSDF (Bidirectional Surface Scattering Distribution Function)) to be used in the evaluation of modeled results. Besides their relative scarcity, most measured radiometric datasets available in the literature provide only a scant description of the material samples employed during the measurements, which makes the use of these datasets as references in comparisons with modeled data problematic. When it comes to a material's constituents in their pure form, such as pigments, data scarcity is aggravated by other practical issues. For example, their absorption spectra are often estimated through inversion procedures, which may be biased by the inaccuracies of the inverted model, or do not take into account in vivo and in vitro discrepancies. In this section, we address these issues and highlight recent efforts to mitigate them. Section 3. For the sake of completeness and correctness, one would like to take into account all of the structural and optical characteristics of a target material during the model design stage. However, even if one were able to fully represent a material at the molecular level, as outlined above, data may not be available to support such a detailed representation. Hence, researchers need to find an appropriate level of abstraction for the material at hand in order to balance data availability, correctness issues and application requirements.
Moreover, no particular modeling approach is superior in all cases, and regardless of the selected level of abstraction, simplifying assumptions and generalizations are usually employed in current models due to practical constraints and the inherent complexity of natural materials. In this section, we address these issues and their impact on the efficacy of existing simulation algorithms. Section 4. In order to claim that a model is predictive, one has to provide evidence of its fidelity, i.e., the degree to which it can reproduce the state and behaviour of a real-world material in a measurable manner. This makes the evaluation stage essential for determining the predictive capabilities of a given model. In this section, we discuss different evaluation approaches, with particular emphasis on quantitative and qualitative comparisons of model predictions with actual measured data and/or experimental observations. Although this approach is bound by data availability, it mitigates biases in the evaluation process and facilitates the identification of model parameters and algorithms that are amenable to modification and correction. In this section, we also discuss the recurrent trade-off between the pursuit of fidelity and the performance of simulation algorithms, along with strategies employed to maximize the fidelity/cost ratio of compute-intensive models. Section 5. The development of predictive light interaction models offers several opportunities for synergistic collaborations between remote sensing and other scientific domains. For instance, predictive models can provide a robust computational platform for the "in silico" investigation of phenomena that cannot be studied through traditional "wet" experimental procedures. Eventually, these investigations can also lead to model enhancements.
In this final section, we employ case studies to examine this iterative process, which can itself contribute to accelerate the hypothesis generation and validation cycles of research in different fields. We also stress the importance of reproducibility, the cornerstone of scientific advances, and address technical and political barriers that one may need to overcome in order to establish fruitful interdisciplinary collaborations.
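The quantitative side of the evaluation stage discussed in Section 4 often reduces to comparing a modeled spectral curve against measurements band by band; a minimal sketch (all spectra synthetic and hypothetical):

```python
import numpy as np

# Hypothetical measured vs. modeled hyperspectral reflectance curves.
wavelengths = np.arange(400, 701, 10)   # nm, visible range
measured = 0.05 + 0.4 / (1 + np.exp(-(wavelengths - 550) / 30))
modeled = measured + np.random.default_rng(6).normal(0, 0.01, measured.size)

# Overall fidelity summary: root-mean-square error across bands.
rmse = np.sqrt(np.mean((modeled - measured) ** 2))

# Per-band relative error highlights WHERE the model needs refinement,
# which an aggregate score alone would hide.
rel = np.abs(modeled - measured) / measured
```

A single aggregate score supports the fidelity claim, while the per-band residuals point at the specific parameters and algorithms amenable to correction.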

HD-5: Remote Sensing with Reflected Global Navigation Satellite System and Signals of Opportunity

Presented by James Garrison and Adriano Camps

When
Sunday, 27 September, 05:00 - 09:00 PDT (Los Angeles, US Pacific Time)
Sunday, 27 September, 14:00 - 18:00 CEST (Central Europe Summer Time)
Sunday, 27 September, 20:00 - 00:00 CST (China Standard Time)
Although originally designed for navigation, signals from Global Navigation Satellite Systems (GNSS), i.e., GPS, GLONASS, Galileo and COMPASS, exhibit strong reflections from the Earth's land and ocean surface. Rough-surface scattering modifies the properties of the reflected signals, and several methods have been developed for inverting these effects to retrieve geophysical data such as ocean surface roughness (winds) and soil moisture. Extensive sets of airborne GNSS-R measurements have been collected over the past 20 years. Flight campaigns have included penetration of hurricanes with winds up to 60 m/s and flights over agricultural fields with calibrated soil moisture measurements. Fixed, tower-based GNSS-R experiments have been conducted to measure sea state, sea level, soil moisture, ice and snow, as well as for inter-comparisons with microwave radiometry. GNSS reflectometry (GNSS-R) methods enable the use of small, low-power, passive instruments, whose power and mass can be made low enough for deployment on small satellites, balloons and UAVs. Early research sets of satellite-based GNSS-R data were collected by the UK-DMC satellite (2003), TechDemoSat-1 (2014) and the 8-satellite CYGNSS constellation (2016). Future mission proposals, such as GEROS-ISS (GNSS Reflectometry, Radio-Occultation and Scatterometry on the International Space Station) and the GNSS Transpolar Earth Reflectometry exploriNg System (G-TERN), would demonstrate new GNSS-R measurements of sea surface altimetry and sea ice cover, respectively. The availability of spaceborne GNSS-R data, and the development of new applications from these measurements, are expected to increase significantly following the launch of these new satellite missions and other smaller ones to be launched in the coming three years (ESA's PRETTY and FFSCAT; China's FY-3E; Taiwan's FS-7R).
Recently, GNSS-R methods have been applied to satellite transmissions at other frequencies, ranging from P-band (230 MHz) to K-band (18.5 GHz). These so-called "Signals of Opportunity" (SoOp) methods enable microwave remote sensing outside of protected bands, using frequencies allocated to satellite communications. Measurements of sea surface height, wind speed, snow water equivalent, and soil moisture have been demonstrated with SoOp. This half-day tutorial will summarize the current state of the art in physical modeling, signal processing and application of GNSS-R and SoOp measurements from fixed, airborne and satellite-based platforms. An outline of the tutorial follows:
• Introduction to the GNSS signal structure: correlation properties of PRN codes; BPSK and BOC modulation.
• Models for the reflected GNSS (GNSS-R) signal: models for rough surface scattering, their limitations, and current attempts to improve upon them; geometry of the bistatic radar problem; second-order moments of the reflected signal waveform as a stochastic process.
• Geophysical model functions: ocean height spectrum models and the generation of filtered mean square slope; models for the slope statistics (e.g. Cox and Munk) and reduction of these models to account for the L-band wavelength of GNSS-R signals; surface reflection coefficients on land and water, and their relationship to soil moisture and ocean salinity.
• Retrieval of geophysical data through inversion of scattering models: direct inversion to estimate surface roughness from delay-Doppler waveform measurements; non-linear least squares approaches and their sensitivity; recent results on full-PDF retrievals; faster computational methods, including series approximations, waveform peak tracking, and matched filters; multi-look methods and their limitations.
• Power calibration of the reflected signal.
• Considerations for Signals of Opportunity: similarities and differences with GNSS-R, and early results demonstrating geophysical retrievals.
• Design of GNSS-R satellite missions.
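The PRN-code correlation property listed first in the outline is what makes reflectometry with passive receivers possible: a ±1 pseudorandom code correlates strongly with an aligned copy of itself and weakly otherwise, so the delay of a surface echo shows up as a correlation peak. A toy sketch (a random ±1 code of GPS C/A length, noise-free, not a real C/A Gold code):

```python
import numpy as np

rng = np.random.default_rng(7)
# Random +/-1 chip sequence with the GPS C/A code length (1023 chips).
code = rng.choice([-1.0, 1.0], size=1023)

true_delay = 137                      # echo delay in chips (hypothetical)
received = np.roll(code, true_delay)  # delayed, noise-free surface echo

# Circular cross-correlation over all candidate delays; the aligned
# replica sums coherently to 1023, misaligned ones nearly cancel.
corr = np.array([np.dot(received, np.roll(code, d)) for d in range(1023)])
delay_hat = int(np.argmax(corr))      # estimated echo delay
```

Extending this single correlation over a grid of delays and Doppler shifts yields the delay-Doppler map that the scattering-model inversions in the outline operate on.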