AMD Embedded Solutions | December 2012
Tuesday, 18 December 2012
AMD Newsletter
Saturday, 15 December 2012
CUDA: WEEK IN REVIEW
CUDA SPOTLIGHT
CUDA NEWS
GPU THESIS WATCH
Title: All-Pairs Shortest Path Algorithms Using CUDA
Author: Jeremy M. Kemp, Durham University
Advisor: Professor Iain Stewart
Dept: School of Engineering & Computing Sciences
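For context, the all-pairs shortest path problem is classically solved by the Floyd-Warshall algorithm, whose inner loop nest parallelizes naturally onto CUDA threads (one thread per matrix entry, one kernel launch per k-step). A serial C++ reference sketch of the algorithm, illustrative only and not code from the thesis:

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Floyd-Warshall all-pairs shortest paths on an n x n distance matrix.
// dist[i*n+j] holds the edge weight from i to j (INF if absent, 0 on the
// diagonal). Each k-iteration relaxes every pair (i, j) through vertex k;
// the (i, j) loop nest is embarrassingly parallel, which is what makes the
// algorithm a natural fit for one CUDA kernel launch per k-step.
const int INF = std::numeric_limits<int>::max() / 2; // halved to avoid overflow on add

void floyd_warshall(std::vector<int>& dist, int n) {
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                dist[i * n + j] = std::min(dist[i * n + j],
                                           dist[i * n + k] + dist[k * n + j]);
}
```

The k-loop carries a dependence, so a GPU version keeps it sequential on the host and parallelizes only the inner relaxation.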
CUDA JOB OF THE WEEK
The Honda Research Institute USA seeks talented candidates to conduct research on vision-based driver assistance systems. Requirements include strong skills in C/C++ and CUDA. Contact fulltime@honda-ri (dot) com (with job #P11F05 in the subject line).
FROM THE BLOGOSPHERE
New on the Parallel Forall blog:
- How to Overlap Data Transfers in CUDA Fortran, by Greg Ruetsch
- How to Optimize Data Transfers in CUDA Fortran, by Greg Ruetsch
- How to Optimize Data Transfers in CUDA C++, by Mark Harris
(Subscribe to the Parallel Forall RSS feed)
New on the NVIDIA blog:
- How Gaming PCs Can Help In the Battle Against AIDS, by George Millington
- GPU Startup Story: Fuzzy Logix Brings Clarity to Analytics, by Gary Rainville
GPU MEETUPS
Find a GPU Meetup in your location, or start one up. Upcoming meetings include:
- Paris, Dec. 18
- New York, Dec. 20
- Paris, Jan. 15
- Brisbane, Jan. 24
- New York, Jan. 24
- Silicon Valley, Jan. 28
CUDA CALENDAR
CUDA RESOURCES
Thursday, 13 December 2012
OpenCL Specification Versions
OpenCL 1.2
The OpenCL 1.2 specification, released on November 15, 2011, provides enhanced performance and functionality in response to requests from the developer community, while retaining backwards compatibility with OpenCL 1.0 and 1.1. New features in OpenCL 1.2 include seamless sharing of media surfaces with DirectX® 9 and 11, enhanced image support, custom devices and built-in kernels, device partitioning, and separate compilation and linking of objects.
- The OpenCL 1.2 specification and header files are available in the Khronos Registry
- The OpenCL 1.2 Quick Reference card (view online).
- The OpenCL 1.2 Online Man pages.
OpenCL 1.1
OpenCL 1.1 includes significant new functionality including:
- Host-thread safety, enabling OpenCL commands to be enqueued from multiple host threads;
- Sub-buffer objects to distribute regions of a buffer across multiple OpenCL devices;
- User events to enable enqueued OpenCL commands to wait on external events;
- Event callbacks that can be used to enqueue new OpenCL commands based on event state changes in a non-blocking manner;
- 3-component vector data types;
- Global work-offset, which enables kernels to operate on different portions of the NDRange;
- Memory object destructor callback;
- Read, write and copy a 1D, 2D or 3D rectangular region of a buffer object;
- Mirrored repeat addressing mode and additional image formats;
- New OpenCL C built-in functions such as integer clamp, shuffle and asynchronous strided copies;
- Improved OpenGL interoperability through efficient sharing of images and buffers by linking OpenCL event objects to OpenGL fence sync objects;
- Optional features in OpenCL 1.0 have been brought into core OpenCL 1.1, including writes to a pointer of bytes or shorts from a kernel, and conversion of atomics to 32-bit integers in local or global memory.
- The OpenCL 1.1 specification and header files are available in the Khronos Registry
- The OpenCL 1.1 Quick Reference card (view online).
- The OpenCL 1.1 Online Man pages.
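As a concrete illustration of the global work-offset feature listed above: in OpenCL 1.1, get_global_id returns IDs starting at the supplied offset rather than at zero, so a kernel can operate on a sub-range of a buffer without index arithmetic in the kernel itself. A plain C++ sketch of that semantics (our own illustrative names, not OpenCL host-API code):

```cpp
#include <cstddef>
#include <vector>

// Mimics how an OpenCL 1.1 global work-offset shifts the IDs a kernel sees:
// with offset off and global size gsz, get_global_id(0) ranges over
// [off, off + gsz). Here we "enqueue" a kernel (a lambda) over that range,
// letting it touch only a sub-region of a buffer.
template <typename Kernel>
void enqueue_1d(std::size_t global_work_offset, std::size_t global_work_size,
                Kernel kernel) {
    for (std::size_t gid = global_work_offset;
         gid < global_work_offset + global_work_size; ++gid)
        kernel(gid); // gid plays the role of get_global_id(0)
}

// Example "kernel": doubles n elements of buf in place, starting at off.
void double_region(std::vector<int>& buf, std::size_t off, std::size_t n) {
    enqueue_1d(off, n, [&](std::size_t gid) { buf[gid] *= 2; });
}
```

In real OpenCL the offset is the global_work_offset argument to clEnqueueNDRangeKernel, which was required to be NULL in OpenCL 1.0.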
OpenCL 1.0
OpenCL (Open Computing Language) is the first open, royalty-free standard for general-purpose parallel programming of heterogeneous systems. OpenCL provides a uniform programming environment for software developers to write efficient, portable code for high-performance compute servers, desktop computer systems and handheld devices using a diverse mix of multi-core CPUs, GPUs, Cell-type architectures and other parallel processors such as DSPs.
- The OpenCL 1.0 specification and header files are available in the Khronos Registry
- The OpenCL 1.0 Quick Reference card (view online).
- The OpenCL 1.0 Online Man pages.
Monday, 10 December 2012
OpenCL Studio 2.0 released
OpenCL Studio integrates OpenCL and OpenGL into a single development environment for high performance computing. The feature-rich editor, interactive scripting language and extensible plug-in architecture support the rapid development of complex parallel algorithms and accompanying visualizations. Version 2.0 now conforms to the Lua plug-in architecture and closely integrates the open-source libCL parallel algorithm library. A complete version of OpenCL Studio is freely available for download at www.opencldev.com, including instructional videos and technology showcases.
New CLOGS library with sort and scan primitives for OpenCL
CLOGS is a library for higher-level operations on top of the
OpenCL C++ API. It is designed to integrate with other OpenCL code,
including synchronization using OpenCL events. Currently only two
operations are supported: radix sorting and exclusive scan. Radix sort
supports all the unsigned integral types as keys, and all the built-in
scalar and vector types suitable for storage in buffers as values. Scan
supports all the integral types. It also supports vector types, which
allows for limited multi-scan capabilities.
Version 1.0 of the library has just been released. The home page is http://clogs.sourceforge.net/
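For reference, the exclusive scan that CLOGS provides computes, for each position, the sum of all strictly preceding elements. A serial C++ sketch of that contract (not CLOGS's actual implementation, which runs as a work-efficient parallel pass on the device):

```cpp
#include <vector>

// Serial reference for an exclusive (prefix-sum) scan:
// out[0] = 0 and out[i] = in[0] + ... + in[i-1].
// A GPU implementation such as CLOGS computes the same result with a
// parallel tree reduction and down-sweep, but the contract is identical.
std::vector<unsigned int> exclusive_scan(const std::vector<unsigned int>& in) {
    std::vector<unsigned int> out(in.size());
    unsigned int running = 0;
    for (std::size_t i = 0; i < in.size(); ++i) {
        out[i] = running;   // sum of strictly preceding elements
        running += in[i];
    }
    return out;
}
```

Exclusive scan is a workhorse primitive: paired with radix sort it drives stream compaction, bucketing, and allocation of variable-sized output.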
OpenCL SDK for new Intel Core Processors
The Intel® SDK for OpenCL Applications now supports the OpenCL 1.1 full profile on 3rd generation Intel® Core™ processors with Intel® HD Graphics 4000/2500. For the first time, OpenCL developers using Intel® architecture can utilize compute resources across both the Intel® processor and Intel® HD Graphics. More information: http://software.intel.com/en-us/articles/vcsource-tools-opencl-sdk
VexCL: Vector expression template library for OpenCL
VexCL is a vector expression template library for OpenCL developed by the Supercomputer Center of the Russian Academy of Sciences. It was created to ease C++-based OpenCL development. Multi-device (and multi-platform) computations are supported. The code is publicly available under the MIT license.
Main features:
- Selection and initialization of compute devices according to an extensible set of device filters.
- Transparent allocation of device vectors spanning multiple devices.
- Convenient notation for vector arithmetic, sparse matrix-vector multiplication, reductions. All computations are performed in parallel on all selected devices.
- Appropriate kernels for vector expressions are generated automatically the first time an expression is used.
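The "convenient notation" means you write whole-vector expressions such as z = x + y and the library turns the expression into a single generated kernel. A toy CPU expression-template sketch conveying the mechanism (illustrative only; this is not VexCL's actual API):

```cpp
#include <cstddef>
#include <type_traits>
#include <vector>

// Toy expression templates in the spirit of VexCL's notation: z = x + y
// builds a lightweight Add expression object, and the assignment evaluates
// it element-wise in a single loop -- the CPU analogue of VexCL fusing a
// whole vector expression into one generated OpenCL kernel.

struct Expr {};  // tag so operator+ applies only to our expression types

template <typename L, typename R>
struct Add : Expr {
    const L& l; const R& r;
    Add(const L& l_, const R& r_) : l(l_), r(r_) {}
    double operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

struct Vec : Expr {
    std::vector<double> data;
    explicit Vec(std::size_t n, double v = 0.0) : data(n, v) {}
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }
    template <typename E>      // evaluate any expression in one pass
    Vec& operator=(const E& e) {
        for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
        return *this;
    }
};

template <typename L, typename R,
          typename = std::enable_if_t<std::is_base_of<Expr, L>::value &&
                                      std::is_base_of<Expr, R>::value>>
Add<L, R> operator+(const L& l, const R& r) { return Add<L, R>(l, r); }
```

Because no temporaries are materialized for sub-expressions, arbitrarily long expressions still make exactly one pass over the data; VexCL applies the same idea per selected OpenCL device.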
SnuCL – OpenCL heterogeneous cluster computing
SnuCL
is an OpenCL framework and freely available, open-source software
developed at Seoul National University. It naturally extends the
original OpenCL semantics to the heterogeneous cluster environment. The
target cluster consists of a single host node and multiple compute
nodes. They are connected by an interconnection network, such as Gigabit
Ethernet and InfiniBand switches. The host node contains multiple CPU cores and
each compute node consists of multiple CPU cores and multiple GPUs. For
such clusters, SnuCL provides an illusion of a single heterogeneous
system for the programmer. A GPU or a set of CPU cores becomes an OpenCL
compute device. SnuCL allows the application to utilize compute devices
in a compute node as if they were in the host node. Thus, with SnuCL,
OpenCL applications written for a single heterogeneous system with
multiple OpenCL compute devices can run on the cluster without any
modifications. SnuCL achieves both high performance and ease of
programming in a heterogeneous cluster environment.
SnuCL consists of the SnuCL runtime and compiler. The SnuCL compiler is based on the OpenCL C compiler in the SNU-SAMSUNG OpenCL framework. Currently, the SnuCL compiler supports x86, ARM, and PowerPC CPUs, AMD GPUs, and NVIDIA GPUs.
Virtual OpenCL (VCL) Cluster Platform 1.14 released
The MOSIX group announces the release of the Virtual OpenCL
(VCL) cluster platform version 1.14. This version includes the SuperCL
extension that allows micro OpenCL programs to run efficiently on
devices of remote nodes. VCL provides an OpenCL platform in which all
the cluster devices are seen as if they are located in the hosting node.
This platform benefits OpenCL applications that can use many devices
concurrently. Applications written for VCL benefit from the reduced
programming complexity of a single computer, the availability of
shared memory, multi-threading and lower-granularity parallelism, as well
as concurrent access to devices in many nodes. With SuperCL, a
programmable sequence of kernels and/or memory operations can be sent to
remote devices in cluster nodes, usually with just a single network
round-trip. SuperCL also offers asynchronous communication with the
host, to avoid the round-trip waiting time, as well as direct access to
distributed file-systems. The VCL package can be downloaded from mosix.org.
CLU Runtime and Code Generator
The Computing Language Utility (CLU)
is a lightweight API designed to help programmers explore, learn, and
rapidly prototype programs with OpenCL. This API reduces the complexity
associated with initializing OpenCL devices, contexts, kernels and
parameters, etc. while preserving the ability to drop down to the lower
level OpenCL API at will when programmers wants to get their hands
dirty. The CLU release includes an open source implementation along with
documentation and samples that demonstrate how to use CLU in real
applications. It has been tested on Windows 7 with Visual Studio.
AMD CodeXL: comprehensive developer tool suite for heterogeneous compute
AMD CodeXL is a new unified developer tool suite that enables developers to harness the benefits of CPUs, GPUs and APUs. It includes powerful GPU debugging, comprehensive GPU and CPU profiling, and static OpenCL™ kernel analysis capabilities, enhancing accessibility for software developers to enter the era of heterogeneous computing. AMD CodeXL is available for free, both as a Visual Studio® extension and a standalone user interface application for Windows® and Linux®.
AMD CodeXL increases developer productivity by helping developers identify programming errors and performance issues in their applications quickly and easily. Developers can now debug, profile and analyze their applications with a full system-wide view on AMD APUs, GPUs and CPUs.
The AMD CodeXL user group (requires registration) allows users to interact with the CodeXL team, provide feedback, get support and participate in the beta surveys.
Webinar: Portability, Scalability, and Numerical Stability in Accelerated Kernels
Seeing speedups of an accelerated application is great, but what
does it take to build a codebase that will last for years and across
architectures? In this webinar, John Stratton will cover some of the
insights gained at the University of Illinois at Urbana-Champaign from
experience with computer architecture, programming languages, and
application development.
The webinar will offer three main conclusions:
- Performance portability should be more achievable than many people think.
- The number one performance-limiting factor now and in the future will be parallel scalability.
- As much as we care about performance, general libraries that will last have to be reliable as well as fast.
GPU Computing: Past, Present and Future
http://developer.download.nvidia.com/CUDA/training/GTC_Express_David_Luebke_June2011.pdf
CUDA 5 Production Release Now Available
The CUDA 5 Production Release is now available as a free download at www.nvidia.com/getcuda.
This powerful new version of the pervasive CUDA parallel computing platform and programming model can be used to accelerate applications using the following four new features, among many others:
• CUDA Dynamic Parallelism brings GPU acceleration to new algorithms by enabling GPU threads to directly launch CUDA kernels and call GPU libraries.
• A new device code linker enables developers to link external GPU code and build libraries of GPU functions.
• NVIDIA Nsight Eclipse Edition enables you to develop, debug and optimize CUDA code all in one IDE for Linux and Mac OS.
• GPUDirect Support for RDMA provides direct communication between GPUs in different cluster nodes.
As a demonstration of the power of Dynamic Parallelism and device code linking, CUDA 5 includes a device-callable version of the CUBLAS linear algebra library, so threads already running on the GPU can invoke CUBLAS functions on the GPU.
For everything you need to know about CUDA 5, see the PDF:
http://developer.download.nvidia.com/GTC/cuda5-everything-you-need-to-know.pdf
Webinar: Learn How GPU-Accelerated Applications Benefit Academic Research
GPUs have become a cornerstone of computational research in high performance computing, with over 200 commonly used applications already GPU-enabled. Researchers across many domains, such as Computational Chemistry, Biology, Weather & Climate, and Engineering, are using GPU-accelerated applications to greatly reduce time to discovery by achieving results that were simply not possible before.
Join Devang Sachdev, Sr. Product Manager at NVIDIA, for an overview of the most popular applications used in academic research and an account of success stories enabled by GPUs. Learn also about a complimentary program which allows researchers to easily try GPU-accelerated applications on a remotely hosted cluster or the Amazon AWS cloud.
Register at http://www.gputechconf.com/page/gtc-express-webinar.html.
OpenCL CodeBench Eclipse Code Creation Tools
OpenCL CodeBench
is a code creation and productivity tools suite designed to accelerate
and simplify OpenCL software development. OpenCL CodeBench provides
developers with automation tools for host code and unit test bench
generation. Kernel code development on OpenCL is accelerated and
enhanced through a language aware editor delivering advanced incremental
code analysis features. Software programmers new to OpenCL can choose to be guided through an Eclipse wizard, while power users can leverage the command line interface with XML-based configuration files. OpenCL CodeBench Beta is now available for Linux and Windows operating systems.
Sixth Workshop on General Purpose Processing Using GPUs (GPGPU6)
The Sixth Workshop on General Purpose Processing Using GPUs (GPGPU6) is held in conjunction with ASPLOS XVIII, Houston, TX, March 17, 2013.
Overview: The goal of this workshop is to provide a forum to discuss new and emerging general-purpose programming environments and platforms, as well as evaluate applications that have been able to harness the horsepower provided by these platforms. This year’s workshop is particularly interested in new heterogeneous GPU platforms. Papers are being sought on many aspects of GPUs, including (but not limited to):
- GPU applications
- GPU compilation
- GPU programming environments
- GPU power/efficiency
- GPU architectures
- GPU benchmarking/measurements
- Multi-GPU systems
- Heterogeneous GPU platforms
CfP: High Performance Computing Symposium
The 21st High Performance Computing Symposium (HPC 2013) is devoted to the impact of high performance computing and communications on computer simulations. Advances in multicore and many-core
architectures, networking, high end computers, large data stores, and
middleware capabilities are ushering in a new era of high performance
parallel and distributed simulations. Along with these new capabilities
come new challenges in computing and system modeling. The goal of HPC
2013 is to encourage innovation in high performance computing
and communication technologies and to promote synergistic advances in modeling methodologies and simulation. It will promote the exchange of ideas and information between universities, industry, and national laboratories about new developments in system modeling, high performance computing and communication, and scientific computing and simulation.
Topics of interest include:
- High performance/large scale application case studies
- GPU for general purpose computations (GPGPU)
- Multicore and many-core computing
- Power aware computing
- Cloud, distributed, and grid computing
- Asynchronous numerical methods and programming
- Hybrid system modeling and simulation
- Large scale visualization and data management
- Tools and environments for coupling parallel codes
- Parallel algorithms and architectures
- High performance software tools
- Resilience at the simulation level
- Component technologies for high performance computing
Final CFP : Third Workshop on Parallel Computing and Optimization, PCO’13, Boston, USA
The Third Workshop on Parallel Computing and Optimization (PCO13) is held in conjunction with the IEEE IPDPS symposium, Boston, USA, May 24, 2013. Paper submission deadline is January 4, 2013.
The workshop on Parallel Computing and Optimization aims at providing a forum for scientific researchers and engineers on recent advances in the field of parallel or distributed computing for difficult combinatorial optimization problems, like 0-1 multidimensional knapsack problems and cutting stock problems, large scale linear programming problems, nonlinear optimization problems and global optimization problems. Emphasis will be placed on new techniques for the solution of these difficult problems, like cooperative methods for integer programming problems and polynomial optimization methods. Aspects related to Combinatorial Scientific Computing (CSC) will also be treated. Finally, the use of new approaches in parallel computing like GPU or hybrid computing, peer-to-peer computing and cloud computing will be considered. Applications to planning, logistics, manufacturing, finance, telecommunications and computational biology will be considered.
Please refer to the workshop webpage at http://conf.laas.fr/PCO13 for more details, and for submission instructions.
Wednesday, 5 December 2012
AMD Gaming Evolved Newsletter
You are Jason Brody, a tourist stranded on a tropical island chain lost in a bloody conflict between psychotic warlords and indigenous rebels. Fighting to escape this beautiful but dangerous paradise, you’ll have to confront who you really are. Developed and published by Ubisoft, Far Cry 3 invites you on a journey through insanity, in which you’ll discover what you’re really made of, if you even live that long….
» Learn More
Never Settle Bundle
This year’s best games on the fastest GPUs! And with a value of up to $170 USD, the NEVER SETTLE bundle is the biggest game promotion in the history of graphics cards.
» Learn More
Find out more about AMD's gaming technologies and see for yourself why AMD is the leader in gaming platforms. Only AMD gives you high-performance processing and industry-leading graphics solutions making it the obvious choice for PC gaming. Visit the Newegg Desktop Gaming Center to learn more.
» Learn More
CUDA Webinars
Following the introduction of the Tesla K20 at this year’s Supercomputing conference, we already have some great feedback from developers; here are just a few quotes.
“Tesla K20 GPU is 2.3x faster than Tesla M2070, and no change was required in our code!” – Senocak, Associate Professor, Boise State University
“The K20 test cluster was an excellent opportunity for us to run Turbostream. Right out of the box, we saw a 2x speed-up.” – G. Pullan, Lecturer, University of Cambridge
“Tesla K20 is very impressive. Our application runs 20x faster compared to a Sandy Bridge CPU.” – A. Tumeo & O. Villa, Scientists, PNNL
We invite you to join us for new webinars about CUDA 5 and the Tesla K20. During these live webinars you will be able to get answers to your questions directly from the presenters. So don’t miss out, and register today.
Inside Kepler Tesla K20 Family - World’s Fastest and Most Efficient Accelerators
Presented by Julia Levites, NVIDIA and Stephen Jones, NVIDIA
Thursday, Dec 13, 2012 10am (PST) – Register Now
Best Practices for Deploying and Managing GPU Clusters
Presented by Dale Southard, NVIDIA
Wednesday, Dec 12, 2012 10am (PST) – Register Now
An Unlikely Symbiosis: Gaming and Super Computing
Presented by Sarah Tariq, NVIDIA
Tuesday, Dec 11, 2012 10am (PST) – Register Now
Introducing Fully Enabled Debugging of CUDA 5 Applications with Allinea DDT
Presented by Ian Lumb, Allinea Technologies
Wednesday, Dec 5, 2012 10am (PST) – Register Now