ASTRON & IBM Center for Exascale Technology

Newsletter 8

Astrodata written in light: analogue over fibre and photonic beamforming

Talking to Bert Offrein, Folkert Horst and Peter Maat

Peter Maat is based in Dwingeloo, Bert Offrein and Folkert Horst both work in the IBM Research Laboratory in Zurich. Together, they’re looking for ways to get the radio signal from the antenna to a central processing location and to make sure the antennas are aiming for the right spot at the right time. Their research involves sending analogue signals through optical fibre efficiently. And designing a new optical chip with potential reaching far beyond the SKA. We spoke to the three of them.

“It all begins with the antenna,” Peter Maat explains, “giving us electrical signals. We need to get them to a place where they can be processed. This is what usually happens: you put the analogue signal into a coaxial electrical cable and after about 100 meters, you have a big expensive machine that digitizes it and sends it on.”

“This is not a practical solution for large parts of the SKA. Operating complex signal processing machinery close to the antennas is expensive: you need to transfer large amounts of electrical power to remote locations, you must protect your antennas from the interference caused by the electronic equipment, and much of the maintenance has to be carried out at distant locations under harsh conditions. So we’ve been working on ways to overcome this unattractive and costly situation by directly transferring the antenna signals to the central processing unit at a distance of 2 to 10 kilometres, as required in the SKA.”

Completely new

Bert Offrein continues: “You can’t use coaxial electrical cable for this, the distance is too large. Optical fibre is much more suitable for the job. But as we want to write the data in light in high quality and receive them in high quality at the other end of the line, there are some issues we need to deal with. How much noise will there be at the reader, around the electronics, the detector? How much will the signal deteriorate during transport? In other words: will an optical link be good enough? And will it be commercially viable?”

It will, according to Peter. “We’ve developed a prototype with which we demonstrate that photonic links fulfil the SKA’s requirements for its signal transport system. Furthermore, we showed that producing and maintaining the optical modules according to the SKA specifications is much more attractive than buying commercial off-the-shelf devices. Even if we take into account that the initial investment is considerable.”

“We’ve worked closely with some Dutch companies, which was very fruitful. We’re not just building a telescope; we’re also giving an impulse to high-tech businesses. Besides, what we have developed here allows us to build a telescope with a completely new architecture. It’s revolutionary. This photonic link will be integrated into SKA phase I, the low-frequency aperture array.”

Photonic beamforming

And then there is phase II, with its mid-frequency aperture array (MFAA). “The MFAA needs about 100 little antennas for every square meter”, Peter explains. “You can imagine that you’ll have to combine these signals in an intelligent way. To ‘look’ in a certain direction, you have to delay each signal slightly to create a perfect match at the output where the signals are combined. This technique is called beamforming. To date, mostly electronic beamformers are applied, but they have some disadvantages with regard to frequency and bandwidth. That’s why we’re working on photonic beamforming instead.”
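The delay-and-sum principle Peter describes can be sketched in a few lines. This toy model is ours, not an SKA design: the array geometry, element spacing and test frequency are invented for illustration. It shows how compensating the geometric delays makes the antenna signals add up coherently in one direction and cancel in another.

```python
import math

# Toy delay-and-sum beamformer for a linear array of N_ELEM antennas.
# All parameters below are illustrative assumptions, not SKA values.
C = 3.0e8        # speed of light, m/s
FREQ = 100e6     # 100 MHz test tone
SPACING = 1.5    # antenna spacing, metres
N_ELEM = 8

def delays(angle_deg):
    """Geometric arrival delay per antenna for a plane wave from angle_deg."""
    a = math.radians(angle_deg)
    return [k * SPACING * math.sin(a) / C for k in range(N_ELEM)]

def beamformed_peak(source_deg, steer_deg, n_samples=2000):
    """Peak amplitude of the combined output when steering at steer_deg
    while a unit tone actually arrives from source_deg."""
    d_src = delays(source_deg)   # delays imposed by the sky geometry
    d_st = delays(steer_deg)     # delays we compensate for ("looking" there)
    peak = 0.0
    for i in range(n_samples):
        t = i * 1e-11            # sample a couple of carrier cycles
        combined = sum(
            math.cos(2 * math.pi * FREQ * (t + d_st[k] - d_src[k]))
            for k in range(N_ELEM)
        ) / N_ELEM
        peak = max(peak, abs(combined))
    return peak

# Steering at the true source direction recovers the full amplitude;
# steering elsewhere makes the signals largely cancel.
print(beamformed_peak(30.0, 30.0))   # close to 1.0
print(beamformed_peak(30.0, -30.0))  # close to 0.0
```

The photonic beamformer performs the same delay compensation in the optical domain, which is what makes it attractive for wide frequency ranges and bandwidths.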

Bert gives us an update: “We’re working on a new kind of photonic chip that replaces electronic beamforming and is able to adjust the delay between the antennas so they’re looking at the exact same spot in the sky. The advantage of photonic beamforming is more efficient signal transport and processing. A demonstrator system was realised by IBM and is currently being tested at ASTRON.” Folkert Horst is excited to be part of the process: “It’s very motivating to see something I’ve developed, this optical beamformer, actually being integrated into a demonstrator. Even more so in this case: radio astronomy has always captured my imagination, ever since I was a kid.”

ASTRON’s radio-over-fibre module

Optical fibre usually transports digital data. Now, we’re looking at ways to optimize transport of the analogue radio signal over fibre. This is ASTRON’s radio-over-fibre module, both with and without lid.

IBM optical beamformer chip

IBM’s optical beamformer chip can bring the optical signals from four antennas into phase and combine them into one optical signal, thus reducing the effort for signal transport and further processing by a factor of four.

Beyond SKA

Peter sees a wide range of applications for optical links and photonic beamforming. “Radio astronomy will be the first area to incorporate this. But it will most likely find its way into next-generation mobile communication, radar, satellites, surveillance systems, medical appliances, anything sensor-related. I feel like we’re riding a wave with this new technology. If we can make this feasible for commercial production, things might go pretty fast.”

Counting milliseconds and Watts: in search of the best accelerators and algorithms

Talking to John Romein and Bram Veenboer

Last month, August 2016, John Romein presented a new paper: “A Comparison of Accelerator Architectures for Radio-Astronomical Signal-Processing Algorithms”. Accelerators are needed to enable high-performance computing at SKA level, and various types have been tested. Together with Bram Veenboer, he is now moving on to testing FPGAs, which will also be added to the comparison. And in the meantime, they’re working on a complex algorithm for efficient imaging.

“The paper I presented in Philadelphia last month has actually turned out pretty nicely”, John Romein says. “We’ve put various accelerators to the test, running our algorithms on them: GPUs (Graphics Processing Units) by NVIDIA and AMD, Intel’s many-core CPU called Xeon Phi, and a Digital Signal Processor (DSP) by Texas Instruments.”

Easy to program

“GPUs showed the best results, overall. They have great compute speed, are quite energy efficient and relatively easy to program. Programming a DSP, for instance, is a lot more work. Using GPUs, the SKA should be able to reach a high fraction of its theoretical peak performance.”

“Our next step, which we’re already working on, is adding new-generation Field-Programmable Gate Arrays (FPGAs) to the comparison. FPGAs have been around for quite a while now, but a recent innovation has made them programmable in a high-level programming language. They can now execute floating-point calculations, making them more suitable for high-performance computing. We’ll be looking into that over the next year.”

Close finish?

“FPGAs have been superior so far in terms of energy consumption. But GPUs have rapidly evolved, so it might be a close finish. We keep an open mind; we’re trying to do the research without any bias. And I’m really curious to see which architecture will win in the end.”

Photo: we built our own devices to measure the instantaneous energy consumption of each of the accelerators

Accelerators will help to process the raw radio signal, the data deluge from the SKA telescope. Signal processing is essential if we want to make glorious high-res sky images. This is where Bram Veenboer comes in.

Visual noise

“My PhD is all about finding the best algorithm and the best architecture for SKA imaging”, says Bram Veenboer. “We’ve been developing a new algorithm, which should do better than the existing ones. It needs to be faster, more energy-efficient and the image it generates should be at least as good. It must be accurate, exact, with little visual noise. And I’m happy to report that it looks promising. Our new algorithm may even deliver higher quality images than the old ones.”

“We have seen that the new imager runs much more efficiently using accelerators than it does on a traditional CPU. This could mean we can achieve significant savings in terms of compute resources for the imaging pipeline. Furthermore, we are now finalizing the algorithm and getting ready to test the new algorithm on LOFAR, the Dutch low-frequency array. If we can prove that it works for LOFAR, we can build a good case for using it in the SKA as well.”

The bandwidth-baseline oracle: an unforgiving tool predicting energy consumption

Talking to Albert-Jan Boonstra and Rik Jongerius

When it comes to actually being able to operate the SKA, one of the main questions is: how much energy will it take? With a given energy budget and the expected development of energy prices, we have a fairly good idea of how many megawatts we can use. But the world’s largest radio telescope still has to be built. Nobody knows exactly how much energy it will consume. Nobody at all? Well, Rik Jongerius and Albert-Jan Boonstra seem to have some answers.

“After extensive testing, our calculation tool is now approaching its final form”, Rik Jongerius announces. “It’s the first tool in radio astronomy that actually comprises the entire processing chain, from antenna to image. We’re working with a huge load of parameters to paint the full picture. That should give us a better sense of the grand total than each step calculated separately.”

Get rid of the white

“The results we have so far are quite sobering. As you can see in the diagram below, it will be impossible to use the full bandwidth (or data volume) from all the antennas at the same time. Or even the full data volume from 60% of the antennas. The white area is what we do not have enough energy budget for, based on current technology.” It is an unforgiving oracle.

“Astronomers, of course, won’t be happy with so much white. They will want to use the SKA’s full potential, not having to cut away bandwidth when using a longer baseline. So this is what project DOME will have to yield: technological breakthroughs that will change the game and drive the white out of this diagram. By introducing accelerators that are more energy efficient, for instance. Or by creating smarter algorithms, demanding less compute power. Or possibly a combination of the two.”


Our calculations show that using the full data volume from all antennas (or even from 60% of them) is impossible, based on current technology. There is simply not enough power to cover the energy consumption. We will need better accelerators or smarter algorithms in order to make this possible.
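The shape of this trade-off can be illustrated with a deliberately crude cost model. Every constant below is invented for the sketch (the real DOME tool derives its costs from the whole chain, antenna to image); the point is only that correlation cost grows with bandwidth and with the square of the number of stations, so the full array at full bandwidth blows the budget while scaled-down combinations fit.

```python
# Toy feasibility check for the bandwidth-vs-array-size trade-off.
# BUDGET_MW and K are invented numbers for illustration only; the
# actual DOME tool models the entire processing chain.
BUDGET_MW = 10.0       # assumed power budget in megawatts
K = 4.0e-7             # assumed MW per (MHz * station^2)
N_STATIONS = 512       # assumed array size
BW_MHZ = 300.0         # assumed full bandwidth

def correlator_power_mw(n_stations, bw_mhz):
    """Correlation cost scales with bandwidth and the square of station count."""
    return K * bw_mhz * n_stations ** 2

def feasible(n_stations, bw_mhz):
    return correlator_power_mw(n_stations, bw_mhz) <= BUDGET_MW

print(feasible(N_STATIONS, BW_MHZ))              # full array, full bandwidth -> False
print(feasible(int(0.6 * N_STATIONS), BW_MHZ))   # 60% of the stations -> still False
print(feasible(N_STATIONS // 2, BW_MHZ / 2))     # half array, half bandwidth -> True
```

In this toy model, even 60% of the stations at full bandwidth stays in the white area, mirroring the diagram above; only combinations well below the full configuration fit the budget.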


“This tool is extremely useful”, Albert-Jan Boonstra agrees. “As Rik said, it is the first tool to take the whole processing chain into account. Also, it uses a sophisticated algorithm that extracts all the relevant characteristics from the source code of the application you’re running, a lot more than you can ever enter into the equation by hand.”

He goes on to illustrate one of DOME’s great values: “A smart tool like this will only work if you develop it in close consultation with radio astronomy experts, as we’ve been doing within DOME. If you don’t cross-pollinate and incorporate both forms of specific know-how, your calculations may differ from the SKA’s future reality by a factor of 10 or much more. For instance, if there are large distances between your antennas, you’ll need to correct for the effect of the earth’s rotation, or the star you’re aiming for will move out of view. For large distances this is computationally very costly.”
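Albert-Jan’s example can be made concrete with a back-of-the-envelope calculation. The formulas are standard geometry; the example baseline lengths are our own picks. For a baseline B, the arrival-time difference between antennas is at most B/c, and the Earth’s rotation (angular rate ω) sweeps that delay at up to ω·B/c, so longer baselines demand much faster, and computationally costlier, delay tracking.

```python
OMEGA_EARTH = 7.292e-5   # Earth's sidereal rotation rate, rad/s
C = 2.998e8              # speed of light, m/s

def max_geometric_delay_s(baseline_m):
    """Worst-case arrival-time difference across a baseline."""
    return baseline_m / C

def max_delay_rate(baseline_m):
    """How fast Earth's rotation changes that delay (seconds per second)."""
    return OMEGA_EARTH * baseline_m / C

# Example baselines (our picks): 100 m, 10 km (the SKA-like link
# distances mentioned earlier), and a 3000 km continental scale.
for b in (100.0, 10_000.0, 3_000_000.0):
    print(f"B = {b / 1e3:7.1f} km: "
          f"max delay {max_geometric_delay_s(b) * 1e6:9.3f} us, "
          f"drift {max_delay_rate(b) * 1e9:9.3f} ns/s")
```

At 10 km the delay already drifts by a few nanoseconds every second, and the correction must be applied continuously for every antenna pair, which is where the compute cost comes from.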

Influencing SKA design

The tool is finding its way into SKA circles. As we speak, team member Gero Dittmann is in Malta, to see if the SKA Science Data Processor (SDP) consortium might benefit from it. Also, André Gunst uses it for his work regarding SKA phase II, the mid-frequency aperture array (MFAA). And Rik adds: “I was very happy when the MFAA consortium asked for advice on the back-end of their design. Their own focus is on the front-end. We’ve been able to show them how front and back interact, using our tool.”

Prepare for take-off: brand new FPGA platform is being prototyped

Talking to André Gunst, Gijs Schoonderbeek and Leandro Fiorin

André Gunst and Leandro Fiorin are both working on new technology for the SKA’s CSP (central signal processor). The CSP translates digitized telescope signals into the data we need to create a visual image of what is happening in deep space. Leandro has designed an ASIC implementing a custom energy-efficient and programmable accelerator architecture: a new chip which has been optimized to minimize the power consumption of the CSP kernels. André’s focus is on the CSP system design. Together with colleagues from Australia and New Zealand, Gijs was involved in developing a new FPGA board.

10 to 60 times less energy

“ASIC stands for Application-Specific Integrated Circuit”, Leandro Fiorin explains. “The one evaluated was optimized for the SKA’s low-frequency domain, in particular considering the requirements of SKA phase I. However, it can be easily adapted to the requirements of SKA phase II, as well as to those of the mid-frequency instrument. Preliminary results look promising: we are now finishing a comparison of our solution with implementations of the CSP pipeline on FPGAs, GPUs, and CPUs. Depending on the algorithm and platform you use, our ASIC is 10 to 60 times more energy efficient.”

“So far, we have only been able to measure energy consumption at chip level. We will still have to take the whole system into account. Still some work to do!”

Working fast

André Gunst and his team have been focusing on a new FPGA platform: the Gemini board. “In 2015, we started our collaboration with Australia and New Zealand. This year in March, a board designer from Australia came over to Dwingeloo to cooperate more closely with Gijs Schoonderbeek on designing the new platform. By working together, fully dedicated, we were able to develop an FPGA platform in just three months’ time. That usually takes at least half a year, so I’m really happy about the progress we’re making.”


The new FPGA board consists of 22 layers and features the latest FPGA device and 8 GB of HMC memory modules. A prototype should be ready for testing in October.

“The FPGA design has gone to a production company, where they will make a prototype. The board consists of 22 layers and features the latest FPGA device and 8 gigs worth of memory modules. The prototype should be ready in October. In the meantime, our firmware engineer is developing the interface firmware, so that we can test all the interfaces as soon as the board arrives. Then we will put the power supply on, hook everything up and see how it runs. It’s an exciting prospect!”

Thinking like humans do: we want our info fast, correct and not too pricy

Talking to Yusik Kim

Yusik Kim and his team members have developed a highly pragmatic cost model for science data centres. “Even though it involves a number of assumptions, we can now for the first time calculate the cost of running a regional science datacentre for the SKA. I had to do a lot of digging on the internet to check architectures, connections and prices, but the results are good.” Rewarding work. Nevertheless, there is something else for us to talk about. Something more exciting. Something to do with predicting our actions.

Predictive caching

“We’re working on a lot of things related to the SKA’s science data processor, the SDP”, says Yusik Kim. “But one thing that I am particularly enthusiastic about is predictive caching. It will make it possible for us to know what data you’re going to need before you even ask for it, so we can bring it to you faster.”

“There are many ways to store data, ranging from flash memory to hard disk and magnetic tape. Retrieving data from tape and putting it on a hard drive for you to read may take up to several hours, whereas you may need the information in minutes. What we’re trying to do is recognize patterns in metadata values in order to predict which information you might be interested in. For instance, when you’re looking at a certain piece of sky, you might also want to see the neighbouring area.”
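The sky-neighbourhood example can be sketched as a toy prefetcher. The tile scheme, the neighbour rule and the two storage tiers below are invented for this illustration; they are not the actual DOME design.

```python
# Toy metadata-driven prefetcher: when a sky tile is read, its
# neighbouring tiles are staged from the slow tier into the cache
# before anyone asks for them. Purely illustrative.
tape = {(ra, dec): f"obs({ra},{dec})"          # slow tier ("tape")
        for ra in range(10) for dec in range(10)}
cache = {}                                      # fast tier (disk/flash)

def neighbours(tile):
    ra, dec = tile
    return [(ra + dr, dec + dd)
            for dr in (-1, 0, 1) for dd in (-1, 0, 1)
            if (dr, dd) != (0, 0)]

def read(tile):
    """Serve a tile, then pre-stage its neighbours."""
    was_hit = tile in cache
    data = cache[tile] if was_hit else tape[tile]   # cache hit or tape recall
    cache[tile] = data
    for nb in neighbours(tile):                     # predictive prefetch
        if nb in tape:
            cache.setdefault(nb, tape[nb])
    return data, was_hit

_, hit1 = read((5, 5))   # cold read: staged from the slow tier
_, hit2 = read((5, 6))   # neighbouring tile: already prefetched
print(hit1, hit2)        # -> False True
```

The real research question is what to use as the prediction signal: here it is a fixed spatial rule, whereas cognitive storage learns the patterns from metadata and access history.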

Mimicking the brain

“This falls under the umbrella of what IBM calls cognitive storage. If you’ve seen the movie Inside Out, you have a pretty good idea of the concept. Memories, or bits of information, pop out from their shelves when something else triggers them. That is what we’re trying to mimic: the associative memory of the human brain.”

“Inside Out” (©Disney Pixar)

In the movie “Inside Out” (©Disney Pixar), memories pop out from the shelves when something triggers them. Our aim is to mimic this human-brain process by predicting search requests and having the data ready before the request is made.

“We are now in the stage of proof of concept. It makes sense to pre-fetch things, so you will have your information faster. But we cannot keep moving data around all the time without setting some priorities, as we would be wasting bandwidth and polluting the cache. So the main question is: how accurate must our predictions be, in order to make this worthwhile?”
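That last question has a simple expected-value form. With an invented cost model (the two timing constants below are placeholders, not measurements), a prefetch pays off once the chance of it being used, times the tape-recall time it saves, outweighs the cost of a wasted transfer:

```python
# Break-even sketch for predictive caching. TAPE_RECALL_S and
# WASTE_COST_S are assumed placeholder values, not measured numbers.
TAPE_RECALL_S = 3600.0   # assumed: staging from tape costs about an hour
WASTE_COST_S = 300.0     # assumed: effective cost of a useless prefetch

def expected_gain_s(accuracy):
    """Expected time saved per prefetch at a given prediction accuracy."""
    return accuracy * TAPE_RECALL_S - (1.0 - accuracy) * WASTE_COST_S

def break_even_accuracy():
    """Accuracy above which prefetching is worthwhile in this model."""
    return WASTE_COST_S / (TAPE_RECALL_S + WASTE_COST_S)

print(f"break-even accuracy: {break_even_accuracy():.1%}")
```

In this toy model the bar is low because tape recalls are so slow; with a faster slow tier or a scarcer cache, the required accuracy rises accordingly.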

From PhD to doctorate

On 20 September, Rik Jongerius (IBM Research) successfully defended his dissertation, entitled “Exascale Computer System Design: The Square Kilometre Array”, at the Eindhoven University of Technology. Rik performed his research at the ASTRON & IBM Center for Exascale Technology in Dwingeloo in the context of the DOME project.

His research focused on computing for the future Square Kilometre Array radio telescope and on design methodologies for exascale computing systems in general. One of his main contributions is an application-specific computing model for radio telescopes, which has already influenced the design of the future telescope: the results led to a redesign to reduce the cost of computing. Furthermore, Rik developed a generic analytic multi-core processor performance model to analyze the performance and energy efficiency of applications and architectures at exascale, suitable for extremely fast design-space exploration.

Rik Jongerius

Doctor Rik Jongerius

PhD update

Rik Jongerius started working on the DOME project in 2012 and is the first PhD candidate to obtain his doctorate. Three of our colleagues, Erik Vermij, Andreea Anghel and Bram Veenboer, are still pursuing their doctoral degrees within the DOME project and are expected to graduate in 2017. Erik's work focuses on near-data processing, while Andreea works on modeling of exascale networks and hardware-independent application profiling. Bram’s aim is to accelerate the processing needed for creating sky images.

Conference: Science in a Digital World

On 13 October, the Netherlands eScience Center will hold the 4th National eScience Symposium in the Amsterdam ArenA. The focus of this symposium will be: “Science in a Digital World”. The special track devoted to Big Data in Astronomy is jointly organized by the NL eScience Centre and ASTRON. As the Dome research has been focusing on Big Data from the start, we expect that this symposium will also be of interest to our readers.

In the Astronomy track, the list of speakers will include John Swinbank (Princeton University), Rees Williams (University of Groningen), Richard Fallows (ASTRON), Chris Broekema (ASTRON), Dorothea Samtleben (Leiden University), and Alex Ninaber (ClusterVision).

Check the eScience website for more info.

INVITATION: Dome workshop on designing for testability

ASTRON and IBM cordially invite colleagues interested in testing digital systems to attend this interactive and hands-on Dome workshop. The workshop will be held on 17 November 2016. This will be a practical get-together, and we encourage you to take an active part in the discussions. You can count on presentations and demonstrations by engineers from Variass, Batenburg, Transfer/JTAG, IBM, and ASTRON.

This workshop will address production and functional testing of digital systems. Topics to be presented and discussed include:

  • What exactly needs to be tested in digital systems?
  • What can be tested?
  • How to prepare a test programme?
  • What are limitations in testing?

In addition to these test considerations, practical tests and test approaches will be presented as well. These include X-ray tests, optical inspection, flying probe/needle tests, needle bed tests, and boundary scan. Afterwards, there will be hands-on demonstrations of several test set-ups.

Check the event website for more information. Please subscribe via the link above or by sending us your details through


17 November 2016

10.00 AM until late in the afternoon


ASTRON and IBM Center for Exascale Technology

Oude Hoogeveensedijk 4

7991 PD Dwingeloo


Van de Hulst auditorium (sessions)

Hooghoudt Room (demos and hands-on set-ups)

In memory of Karin Spijkerman

We are very sad to have lost our dear ASTRON and Dome colleague, Karin Spijkerman. She died on 16 July 2016, aged 52.

Karin had been on board since 2008 as a project controller at ASTRON. Over the years, she was involved in various grant applications. In this way, she contributed to ASTRON’s renovation project, Apertif, Dome, MidPrep, ALERT and SNN projects such as LOFAR SNN II and SKA-TSM.

Together with Emmy Boerma, Karin was the face of project control, always striving for greater professionalism. She played an important part in restructuring our project administration. She was great at making spreadsheets and creating formulas to make everybody’s work a little easier. And she was strict. Many of us remember the stern look on her face when we were late with our timesheets.

Karin leaves behind a husband and two grown daughters. We wish them all the strength in the world.