Parallel computing refers to the process of breaking down larger problems into smaller, independent, often similar parts that can be executed simultaneously by multiple processors communicating via shared memory, the results of which are combined upon completion as part of an overall algorithm. The primary goal of parallel computing is to increase available computation power for faster application processing and problem solving, and its popularization and evolution in the 21st century came in response to processor frequency scaling hitting the power wall. Parallel applications are typically classified as either fine-grained parallelism, in which subtasks communicate several times per second; coarse-grained parallelism, in which subtasks do not communicate several times per second; or embarrassing parallelism, in which subtasks rarely or never communicate. Mapping in parallel computing is used to solve embarrassingly parallel problems by applying a simple operation to all elements of a sequence without requiring communication between the subtasks.
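As a minimal sketch of such a mapping, the Python snippet below applies one independent operation to every element of a sequence using a process pool; the brighten function and the toy pixel values are illustrative assumptions, not part of the original text.

```python
from multiprocessing import Pool

def brighten(pixel):
    # A simple, independent operation: each element is processed
    # without any communication between subtasks.
    return min(pixel + 40, 255)

if __name__ == "__main__":
    pixels = [12, 200, 34, 255, 90]  # toy stand-in for image data
    with Pool(processes=4) as pool:
        result = pool.map(brighten, pixels)  # one subtask per element
    print(result)  # [52, 240, 74, 255, 130]
```

Because no subtask depends on another, the map can be split across any number of workers and the results simply concatenated, which is exactly what makes embarrassingly parallel problems the easiest to scale.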
Parallel computing infrastructure is typically housed within a single datacenter, where several processors are installed in a server rack; computation requests are distributed in small chunks by the application server and then executed simultaneously on each server. There are generally four types of parallel computing, available from both proprietary and open-source parallel computing vendors: bit-level parallelism, instruction-level parallelism, task parallelism, and superword-level parallelism. Bit-level parallelism increases processor word size, which reduces the quantity of instructions the processor must execute in order to perform an operation on variables greater than the length of the word. Instruction-level parallelism takes two forms: the hardware approach works upon dynamic parallelism, in which the processor decides at run-time which instructions to execute in parallel, while the software approach works upon static parallelism, in which the compiler decides which instructions to execute in parallel.
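Task parallelism, one of the four types named above, runs different operations concurrently rather than the same operation over different data. The sketch below illustrates the idea with Python's concurrent.futures; the two statistics functions are made-up stand-ins for real tasks.

```python
from concurrent.futures import ProcessPoolExecutor

def mean(values):
    return sum(values) / len(values)

def span(values):
    return max(values) - min(values)

if __name__ == "__main__":
    data = [3.1, 4.1, 5.9, 2.6, 5.3]
    # Task parallelism: two *different* operations run concurrently
    # on the same data, unlike the data-parallel map above, which runs
    # one operation on different data.
    with ProcessPoolExecutor(max_workers=2) as ex:
        f1 = ex.submit(mean, data)
        f2 = ex.submit(span, data)
        print(f1.result(), f2.result())
```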
Photogrammetry is one of the most reliable techniques for generating high-resolution topographic data, and it is key to territorial mapping and change detection analysis of landforms in hydro-geomorphological high-risk areas. Specifically, Structure from Motion (SfM) is an emerging topographic survey technique that addresses the problem of determining the 3D position of image descriptors in order to estimate three-dimensional structures. Thanks to the potential of the SfM algorithm and the development of Unmanned Aerial Vehicles (UAVs), which allow the on-demand acquisition of high-resolution aerial images, it is possible to survey extended areas of the Earth's surface and monitor active phenomena through multi-temporal surveys. However, the ability to cover remote and wide areas at very high resolution is countered by the need to capture large datasets, which can limit the photogrammetric process due to the need for high-performance hardware. This paper presents a photogrammetric workflow based on Free and Open-Source Software (FOSS), which is able to return different outputs and to manage a large amount of data in reasonable time, through the distribution of the most computationally expensive steps on computing clusters hosted by the ReCaS-Bari data center for scientific research. The results are given in terms of performance evaluations based on different computing configurations of the clusters and setups of the steps of the workflow. The HTC cluster test with a parallel SSH approach yielded a reduction of several hours in the processing time of thousands of UAV images compared to the classic photogrammetric process on a single workstation with commercial software. A parallel test, aimed at validating the performance of a single server of the new HPC cluster, gave very good results, halving the processing time with respect to the HTC cluster test.
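To make the SfM step concrete, here is a minimal two-view sketch using OpenCV: it matches image descriptors between two photos, recovers the relative camera pose, and triangulates 3D points. This is an illustrative toy under assumed inputs (grayscale image arrays and a known camera matrix K), not the paper's FOSS pipeline.

```python
import cv2
import numpy as np

def two_view_sfm(img1, img2, K):
    """Minimal two-view SfM: match descriptors, recover relative pose,
    triangulate 3D points (toy sketch, up to an unknown global scale)."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    # Match image descriptors between the two views.
    matches = cv2.BFMatcher().match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # Estimate relative camera pose from the essential matrix.
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    # Triangulate: determine the 3D position of the matched descriptors.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T  # N x 3 point cloud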
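The parallel SSH approach mentioned above can be pictured as launching independent remote jobs on several nodes at once. The sketch below does this from Python; the node names, data paths, and the run_photogrammetry_step command are all hypothetical, since the paper's actual cluster layout and commands are not given here.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical node names and image-chunk paths.
NODES = ["node01", "node02", "node03"]
CHUNKS = ["/data/uav/chunk01", "/data/uav/chunk02", "/data/uav/chunk03"]

def process_chunk(node, chunk):
    # One remote job per image chunk over SSH; chunks are processed
    # independently, so no inter-node communication is needed.
    cmd = ["ssh", node, f"run_photogrammetry_step {chunk}"]  # hypothetical command
    return subprocess.run(cmd, capture_output=True, text=True).returncode

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=len(NODES)) as ex:
        exit_codes = list(ex.map(process_chunk, NODES, CHUNKS))
    print(exit_codes)  # 0 for each successfully processed chunk
```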