As a charitable service-based nonprofit organization (NPO) coordinating individuals, businesses, academia and governments with interests in High Technology, Big Data and Cybersecurity, we bridge the global digital divide by providing supercomputing access, applied research, training, tools and other digital incentives “to empower the underserved and disadvantaged.”

Abstracts and Bios

Keynote Address:

To Compete You Must Compute
Susan L. Baldwin - Executive Director, Compute/Calcul Canada

This presentation will describe Compute Canada, its role as Canada’s national HPC platform and some of the challenges we are facing. The strategic focus for the period 2010-2020 will be discussed, including new avenues of research potential with SMEs and in collaboration with the Rocky Mountain Supercomputing Centers. The HPC world must expand from academia and large companies into the business models and practices of SMEs. How do we enable and encourage this transformation? How can we work together to create competitive advantage, generate wealth, accelerate innovation, and ensure the health and well-being of our citizens?

Susan Baldwin is Compute/Calcul Canada’s first Executive Director, responsible for leading the creation of a powerful national HPC platform for research. She works in collaboration with the university-based HPC consortia to provide for overall architecture and planning, software integration, operations and management and coordination of user support for the national HPC platform. She is responsible for developing and executing the strategic plan for HPC in Canada and securing the necessary funding. Compute Canada is also supporting the Natural Sciences and Engineering Research Council of Canada (NSERC) in the G-8 Exascale Project. In a recent consultation by the Government of Canada on a Digital Economy Strategy, the submission prepared by Ms. Baldwin won the most votes for the best idea.

Prior to taking the Executive Director position, she was the Chief Administrative Officer with CANARIE Inc., involved in the strategic planning and implementation for both advanced broadband networks and advanced applications and was responsible for the management of all funding programs.

For two decades prior to joining CANARIE, Ms. Baldwin held a variety of senior executive positions with various departments within the Government of Canada, including Executive Director, Broadcasting at the CRTC; Director General, Broadcasting Policy at the Department of Canadian Heritage; Director General, New Media with Industry Canada; and Director, Research and Technology Policy with the Department of Communications. As Director General, New Media, she was instrumental in proposing and establishing the Information Highway Advisory Council and headed its Executive Secretariat. A skilled negotiator, she was part of Canadian delegations to numerous international negotiations, including those on the information highway and GATS (General Agreement on Trade in Services).

Ms. Baldwin holds an Honours BA from York University and a master’s degree from the University of Toronto.


Technology Trends in HPC
Addison Snell - CEO, Intersect360 Research

Addison Snell will present his views on the important technology trends affecting the HPC market today and over the next three to five years. Specific technologies will include GPU accelerators, parallel file systems, and cloud computing, along with choices in interconnects, microprocessors, and operating systems. In his talk, Mr. Snell will draw upon several recent, broad-based studies of users in the HPC industry, including a new, groundbreaking study on HPC adoption drivers and barriers at the low end.

Intersect360 Research is a market advisory and consulting practice focused on the high performance computing industry. The user-driven research methodology of Intersect360 Research is inclusive of application workflows both within and beyond science and engineering and tracks the complete technology spectrum within HPC. Intersect360 Research maintains an exclusive content and research partnership with Tabor Communications, publishers of HPCwire and HPC in the Cloud, and Mr. Snell is co-host of the weekly HPCwire Soundbite podcast.

High Performance Computing for Mortals: There are Drivers & there are Passengers
Earl J. Dodd - Executive Director, Rocky Mountain Supercomputing Centers, Inc.

As this year’s ISC “HPC: Future Technology Building Blocks” session made clear, new approaches will not emerge from evolutionary changes in processor speed and scale from today’s petascale and future exascale systems. These ultrascale systems will require fundamental breakthroughs in hardware technology, programming models, algorithms and software at the system and application levels. And therein lies the issue: what does the vast majority of the world do with our “hand-me-down” terascale cyberinfrastructures? Earl Dodd will address leading-edge best practices in terascale computing for what the Council on Competitiveness calls the “Missing Middle.” An overview of HPC access via cloud computing models, benchmarking, tools, techniques and skills development will be presented, grounding us in what is possible today and helping us mere mortals benefit from the HPC ecosystem.

Earl Dodd is the Executive Director of the Rocky Mountain Supercomputing Centers, Inc. (RMSC) in Montana. RMSC is a first-of-a-kind public-private partnership that focuses on the HPC cloud as an economic development engine for Montana and the region. Earl’s areas of technical interest are strategy formulation for peta/exascale architectures and ultrascale 3D visualization and collaboration to drive the next generation of computationally steered workflows and applications. Leading RMSC as a “social imagineer” is a lifelong dream come true that is helping build the knowledge economy for Montana. He holds BS and MS degrees in Mining Engineering from Montana College of Mineral Science & Technology ("Montana Tech") and an MBA from Tulane University. Earl has 29 years of experience in supercomputing and high performance computing (HPC).

Emerging Technologies and Innovative Solutions for Petroleum Engineering
Ravi Vadapalli, PhD - Research Scientist, Department of Petroleum Engineering, Texas Tech University

The current rate of depletion of hydrocarbon reserves encourages the development of new methods for exploring partially tapped and untapped reserves at home and abroad. Success in these efforts relies on a comprehensive understanding of the multiscale aspects (from microns to miles) of reservoir engineering to maximize productivity and to optimize the management of hydrocarbon reservoirs. High performance computing has become an indispensable tool for predicting yields through subsurface modeling. Due to the complexity and national importance of this topic, industry-academia-government partnerships play a vital role in these studies. A well-trained engineering workforce is necessary to support the translation of knowledge into expertise. The proprietary nature of reservoir data and the lack of robust, secure and reliable data-sharing methodologies have limited the scope of such partnerships until now. Grid computing [Plaszczak and Wellner 2006] is an emerging collaborative computing environment that supports both proprietary and role-based security standards.

We have demonstrated [Vadapalli et al. 2008] the promise of grid computing for reservoir modeling using history matching. For this study, Schlumberger's Eclipse software environment and the TIGRE [Vadapalli et al. 2007] grid computing environment - funded through the Texas Enterprise Fund (TEF) - were employed. The preliminary results served as precursors to new collaborative opportunities with Schlumberger Information Solutions (SIS) and its partners and resulted in nearly $45 million in donations to Texas Tech University (TTU). These efforts led to the creation of the Petroleum Engineering Grid (PEGrid) project, and currently several software/hardware vendors, oil/gas companies and universities are taking part in this project.
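For readers unfamiliar with the technique, history matching amounts to adjusting reservoir-model parameters until simulated production reproduces the observed production history; each candidate run is independent, which is what makes grid computing attractive for it. The following is a toy, single-parameter sketch of that idea in Python - not the Eclipse/TIGRE workflow used in the cited study - and the exponential-decline "simulator" and all numbers are hypothetical.

    # Toy illustration of the history-matching idea (not the Eclipse/TIGRE
    # workflow from the cited study). All numbers are hypothetical.
    import numpy as np

    t = np.arange(0.0, 10.0, 0.5)                 # years of production history
    observed = 1000.0 * np.exp(-0.35 * t)         # stand-in for field data

    def simulate(decline_rate):
        # Stand-in for one reservoir-simulator run farmed out to grid resources.
        return 1000.0 * np.exp(-decline_rate * t)

    # Parameter sweep: each candidate run is independent (embarrassingly parallel).
    candidates = np.linspace(0.05, 1.0, 200)
    mismatch = [np.sum((simulate(r) - observed) ** 2) for r in candidates]
    best = candidates[int(np.argmin(mismatch))]
    print(f"history-matched decline rate: {best:.3f}")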

The main objectives of the PEGrid project are to (1) provide state-of-the-art data security through X.509-based strong authentication and authorization that supports industry-academia partnerships, (2) create a computational environment that supports collaborative research in petroleum and other related energy engineering disciplines, and (3) facilitate training the engineering workforce in innovative technologies and state-of-the-art practices in petroleum engineering. In addition, many forms of alternative energy - such as geothermal, wind, and compressed air - rely on subsurface modeling to develop estimates of injection, storage, and withdrawal capacity, which directly impact energy yield from these systems. Thus, subsurface modeling has an ever-increasing role to play in predicting resource yield beyond the petroleum industry. In particular, both wind and water resource modeling are of specific interest to the West Texas area and therefore to the PEGrid project.
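As a rough illustration of what X.509-based strong authentication can look like in practice (PEGrid's actual middleware is not described here), the Python sketch below configures mutually authenticated TLS in which both parties must present certificates signed by a shared certificate authority. The file names, host name and port are hypothetical placeholders; role-based authorization would be layered on top of the authenticated identity.

    # Minimal sketch of X.509 mutual authentication using Python's standard
    # ssl module (not PEGrid's actual stack). File names, host and port are
    # hypothetical placeholders.
    import socket
    import ssl

    # Grid-service side: require a client certificate signed by the shared CA.
    # This context would wrap the listening socket on the service side.
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
    server_ctx.load_verify_locations(cafile="partnership_ca.pem")
    server_ctx.verify_mode = ssl.CERT_REQUIRED  # reject unauthenticated clients

    # Client side: present its own certificate and verify the server's.
    client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    client_ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")
    client_ctx.load_verify_locations(cafile="partnership_ca.pem")

    with socket.create_connection(("grid.example.edu", 9443)) as sock:
        with client_ctx.wrap_socket(sock, server_hostname="grid.example.edu") as tls:
            # Both ends have now proven their identity via X.509 certificates.
            print(tls.getpeercert()["subject"])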

Dr. Ravi Vadapalli is currently a Research Scientist at the High Performance Computing Center and an Adjunct Professor in the Bob L. Herd Department of Petroleum Engineering, Texas Tech University in Lubbock. He received his Ph.D. in 1994 from Andhra University, Visakhapatnam, India, and an M.S. in Computational Engineering in 2002 from Mississippi State University. He is the Co-Founder of the Petroleum Engineering Grid project and Co-Director of the Center for Multiscale Reservoir Modeling and Characterization at Texas Tech University. He has authored and co-authored over 25 articles in peer-reviewed journals and conferences, and his main areas of research are parallel and distributed computing, reservoir modeling, cancer radiotherapy, and high performance computing education and outreach.

Effects of Clouds and Accelerators on the HPC Production Process
Steve Hebert - President and CEO, Nimbix

If we consider the data processed in high performance computing as an output of a production process which has economic benefit to its producers, we can conduct an analysis of its factors of production to determine how they impact output and productivity. While an exhaustive application of production theory is beyond the scope of this presentation, its consideration in modern datacenter computing can provide useful insights about the relative merits of cloud solutions and hardware accelerators in HPC applications.

Cloud computing infrastructure and accelerator-based computing platforms can be taken as alternative inputs in the production process, accompanied by other necessary inputs (power, space, labor) and benchmarked to understand their effect on output. With analysis, one can construct a model to evaluate the merits of alternative computing paradigms in HPC applications.
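As a back-of-envelope illustration of such a model (all figures below are invented for the example, not taken from the talk), the Python sketch compares cost per unit of output for an owned accelerator node with high fixed costs and low marginal cost against on-demand cloud capacity with mostly variable costs.

    # Hypothetical production-function-style comparison: completed jobs are the
    # output; compare cost per job for an owned accelerator node versus
    # on-demand cloud capacity. All cost figures are invented for illustration.

    def cost_per_job(jobs_per_month, fixed_monthly, variable_per_job):
        """Total monthly cost divided by output (jobs completed)."""
        return (fixed_monthly + variable_per_job * jobs_per_month) / jobs_per_month

    for jobs in (50, 200, 1000):
        owned = cost_per_job(jobs, fixed_monthly=4000.0, variable_per_job=1.0)
        cloud = cost_per_job(jobs, fixed_monthly=0.0, variable_per_job=15.0)
        print(f"{jobs:5d} jobs/month   owned: ${owned:8.2f}/job   cloud: ${cloud:8.2f}/job")

At low utilization the mostly variable cloud cost wins; past a breakeven volume the owned node's amortized fixed cost wins, which is the fixed-versus-variable trade-off described above.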

If clouds and accelerators are considered as technologies applied to the production process, there are tangible effects on both inputs and outputs. Clouds, for example, can transform relatively fixed factors of production over the short run into variable ones while also potentially reducing labor costs. Accelerators produce higher output while minimizing energy consumption and physical space requirements.

There are additional complexities that must be considered in any quantitative or qualitative review of cloud and accelerator technologies, including factor costs. One such complexity is the lack of available application software that is optimized to fully leverage the benefits of the technology relative to ubiquity of x86 and multicore software. From a cloud perspective, there is the issue of workflow integration and workload management. In both cases one must consider the trade-offs of investment in the technology versus increases in productivity.

Nevertheless, to the extent that emerging technology providers can neutralize capital investment barriers in application optimization and workflow integration, thereby reducing technology adoption costs, there is a compelling case to be made that clouds and accelerators will boost productivity of an HPC production process.

Steve Hebert has served in numerous sales, marketing and executive roles in the IT and semiconductor industries over the last 17 years. Most recently, at Altera Corporation, he worked with OEMs leveraging FPGA-based hardware to support initiatives in accelerated computing. Prior to Altera, he held management and consulting roles in the IT sector with a focus on Cisco networking solutions and related technologies. In 2000, Mr. Hebert served as President and CEO of a digital media company that deployed cloud-based streaming video technology for corporate communications. He began his career at Texas Instruments’ semiconductor group in Sales & Marketing.
Mr. Hebert earned a Bachelor of Science in Electrical Engineering from Santa Clara University.

It’s SIMD All Over Again
Doug McCowan - Bureau of Economic Geology, University of Texas

The SIMD style of parallel computing, where many processors execute the same instruction but on different data, flourished back in the days of the Connection Machine and MasPar supercomputers. However, the advent of commodity PC cluster computing, which is actually a kind of MIMD computing, effectively ended it. Now it’s back! The commercial appearance of general-purpose graphics processing units (GPGPUs), which are SIMD machines, means that, once again, we must seek SIMD algorithms to solve scientific problems on this hardware. Since nature is SIMD, there is substantial hope that this will be a successful quest. We must realize, however, that these new SIMD algorithms may be quite different from those already in use and possibly less efficient.
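As a minimal illustration of the distinction (not drawn from the talk itself), the Python/NumPy sketch below contrasts a scalar-style loop with a SIMD-style vectorized expression that applies one operation across all data elements at once - the same data-parallel pattern a GPGPU kernel exploits.

    # Scalar loop versus SIMD-style expression over the same data.
    import numpy as np

    a = np.random.rand(100_000)
    b = np.random.rand(100_000)

    # Scalar style: one element at a time, each with its own instruction stream.
    c_loop = np.empty_like(a)
    for i in range(a.size):
        c_loop[i] = a[i] * 2.0 + b[i]

    # SIMD style: the same instruction applied to all the data at once.
    c_simd = a * 2.0 + b

    assert np.allclose(c_loop, c_simd)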

Current R&D Hands-On Work on Pre-Exascale HPC Systems
Joshua Mora, PhD - Senior Member of Technical Staff, Performance Center of Excellence, Advanced Micro Devices, Inc.

Technical advisor in HPC for the manufacturing, F1 car design, and oil & gas markets. Design and validation of HPC (hardware plus software) solutions for academia and enterprise. Hardware: multi-socket, multi-core, multi-chipset, multi-rail networking, NAS, InfiniBand, GPU. Software: Linux/Windows Server for HPC, compilers, debuggers, profilers, libraries, scripting, schedulers. Large-scale scientific application development and tuning on state-of-the-art clusters and supercomputers.

Joshua Mora’s Specialties:
State of the art in software and hardware for high performance computing. Development and tuning of distributed-memory, large-scale solvers: factorizations, Krylov methods, multigrid, preconditioners. Applications/benchmarks: HPCC, HPL, STREAM, IMB, IB, IOZONE, SPEC, FLUENT, STAR[CD,CCM], ACUSOLVE, LSDYNA, POP, WRF, NWCHEM, GROMACS, PAMCRASH, ESI-CFD, ECLIPSE, LAMMPS, CPMD, MPIBLAST, HOMME, HIRLAM.