
The intelligent, FPGA-based event builder and
Data Acquisition System (iFDAQ) of COMPASS
– an exemplary system for the future?
D. Steffen (a), I. Konorov (b), J. Novy (c), O. Subrt (a), M. Bodlak (d), Y. Bai (b), V. Frolov (e),
V. Jary (c), S. Huber (b), D. Levit (b), M. Virius (c)
(a) CERN, Geneva, Switzerland; (b) Technische Universität München, Germany;
(c) Czech Technical University, Czech Republic; (d) Charles University, Czech Republic;
(e) Joint Institute for Nuclear Research, Russia
iFDAQ and Hardware Event Builder
Data Handling Card (DHC)

[Figure: iFDAQ readout topology. Frontend cards (~300k channels) are read out over ~1000 links by CATCH and HGeSiCA modules (behind SLink multiplexers carrying 2-4 SLinks each), by Gandalf modules, and by TIGER VXS data concentrators (up to 18 links). All streams feed IPbus-controlled DHCmx multiplexer cards; the DHCs are attached to the control network and the COMPASS inner network.]
• FPGA: Xilinx Virtex-6
• memory: 4 GB DDR3 SDRAM
• firmware variants:
  o DHCmx (12:1 multiplexer)
  o DHCsw (8x8 switch)
  o DHCsb (PCIe spill buffer)
• interfaces:
  o TCS (Trigger Control System)
  o 1 Gb Ethernet control network (IPbus)
  o 16 serial data links (SLINK)
  o PCIe (for the spill buffer)
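The DHCmx and DHCsw firmware variants merge event fragments arriving on many SLinks into one output stream, combining fragments that belong to the same event. A minimal software sketch of such a 12:1 merge (the data model and names are invented for illustration; the real DHC does this in FPGA firmware):

```python
# Sketch of a DHCmx-style N:1 event merge: fragments arriving on
# N input links are combined per event number into one output event.
# Hypothetical data model -- the real logic lives in the FPGA.
from collections import defaultdict

N_LINKS = 12  # DHCmx multiplexes up to 12 SLink inputs into 1 output

def merge_fragments(fragments):
    """fragments: iterable of (event_id, link_id, payload) tuples."""
    events = defaultdict(dict)
    for event_id, link_id, payload in fragments:
        events[event_id][link_id] = payload
    merged = []
    for event_id in sorted(events):
        links = events[event_id]
        # Missing or defective fragments are replaced by empty frames,
        # as the hardware event builder does, so downstream stages
        # always see a complete event structure.
        merged.append((event_id,
                       [links.get(i, b"") for i in range(N_LINKS)]))
    return merged
```

With fragments for event 1 on links 0 and 2 only, the merged event still carries all 12 slots, with empty frames where links delivered nothing.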
present status

[Figure: present event-building topology. 64-96 SLinks (8-12 SLinks per card x 8 cards) feed the DHCmx layer; 8x SLink connect the DHCmx cards to one DHCsw (8x8 switch), and a further 8x SLink lead to 8 readout computers with a ~64 TB disk pool, which reach CASTOR through a 10 Gb/s router and a gateway.]

• ~250 modules in 28 VME crates
• form factor: μTCA / AMC standard; the DHC on a 6U VME carrier card is used for DHCmx and DHCsw, the DHC alone for the DHCsb
• throughput: 3 GB/s as DHCsw
• On-spill data rate: 1.5 GB/s
• 500 MB/s sustained rate
• Buffering on all levels of event building

Hardware event builder functionality:
• 3 independent interfaces:
  o time distribution (TCS)
  o data flow (SLINK)
  o slow control (IPbus)
• Automatic resynchronization of FE modules
• Data check / FE-error handling
• Data throttling
• Replacing missing or defective frames by empty frames
• Self-synchronized event-building data flow
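The gap between the 1.5 GB/s on-spill rate and the 500 MB/s sustained rate is bridged by the buffering on all levels of event building. A back-of-envelope sizing, with assumed spill-structure numbers (the actual SPS cycle parameters are not stated on the poster):

```python
# Back-of-envelope spill-buffer sizing. The on-spill and sustained
# rates are from the poster; the spill and cycle lengths below are
# placeholder assumptions, not COMPASS parameters.
ON_SPILL_RATE = 1.5   # GB/s while beam is delivered
SUSTAINED_RATE = 0.5  # GB/s that can be written out continuously
SPILL_LENGTH = 10.0   # s of beam per cycle (assumption)
CYCLE_LENGTH = 40.0   # s total cycle length (assumption)

# Buffers fill during the spill and drain between spills.
fill = (ON_SPILL_RATE - SUSTAINED_RATE) * SPILL_LENGTH   # GB buffered
drain = SUSTAINED_RATE * (CYCLE_LENGTH - SPILL_LENGTH)   # GB drainable
avg_rate = ON_SPILL_RATE * SPILL_LENGTH / CYCLE_LENGTH   # GB/s average

print(f"buffered per spill: {fill:.1f} GB")    # 10.0 GB
print(f"drain capacity:     {drain:.1f} GB")   # 15.0 GB
print(f"average input rate: {avg_rate:.3f} GB/s")  # 0.375 GB/s
```

With these assumed numbers the average input rate stays below the sustained output rate, so the buffers empty before the next spill; the same arithmetic determines the minimum buffer depth per level.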
near future

Upgrade of the Time Distribution System:
• Bidirectional PON (Passive Optical Network), 16x16 topology
• UCF protocol (Universal Communication Framework)
• Receiving node status information
• Synchronous node configuration (topology reconfiguration)
• IPbus protocol for slow control

ATCA standard:
• 4 DHC per carrier card
• 3 carrier cards => 12 AMC slots
• Interconnection via full mesh
• Integration of spare resources
• IPMC – Intelligent Platform Management Controller (ATLAS, xTCA Interest Group)

Extended intelligence:
• Automatic recognition of and compensation for hardware failure
• Automatic load balancing
• Continuously running DAQ
Scaling Possibility

[Figure: scaled event builder. A DHCmx layer (10 GB/s) feeds ATCA carriers (100 GB/s) whose DHCxx modules are interconnected through a 144x144 cross switch over the full-mesh backplane.]

Crosspoint switch card:
• crosspoint switch: Vitesse VSC3144-02 (144x144), fully programmable
• FPGA: Xilinx Artix-7, for cross switch control and monitoring and as hub to the AMC modules
• 4 x 12-channel optical transceivers
• Ethernet (IPbus), TCS receiver
• RTM; full mesh interconnect

Scenario for:
• Xilinx 7-series FPGA
• SLINK interfaces replaced by Aurora

Software:
• Machine learning algorithm for data quality monitoring
• Example event for automatic evaluation of DAQ topology and configuration
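A fully programmable crosspoint switch such as the VSC3144-02 can route any of its 144 inputs to any of its 144 outputs under software control. A toy model of that programming interface (the class and method names are invented for illustration, not the Vitesse API; the real device is configured by the Artix-7 FPGA):

```python
# Toy model of a fully programmable 144x144 crosspoint switch.
# connect()/source_of() are invented names for illustration only.
class CrosspointSwitch:
    def __init__(self, size=144):
        self.size = size
        self.route = {}  # output port -> input port driving it

    def connect(self, inp, out):
        if not (0 <= inp < self.size and 0 <= out < self.size):
            raise ValueError("port out of range")
        # Any input may drive any output; one input can fan out to
        # several outputs, but each output has exactly one source.
        self.route[out] = inp

    def source_of(self, out):
        return self.route.get(out)

sw = CrosspointSwitch()
sw.connect(3, 77)   # route input 3 to output 77
sw.connect(3, 78)   # the same input can drive a second output
```

Because every output is freely assignable, the DAQ topology can be rearranged in software, which is what makes the automatic failure compensation and load balancing above possible.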
10000
1 DHCsw
2 layers of DHCsw
1 TB/s
Matrix of
20 DHCsw
Matrix of
20 DHCsw
Matrix of
20 DHCsw
Matrix of
20 DHCsw
Matrix of
20 DHCsw
4 layers of DHCsw
costs [k€]
far
future
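The three cost/throughput points from the scaling plot (about 4 k€ at 10 GB/s, 80 k€ at 100 GB/s, and 1.2 M€ at 1 TB/s) suggest that cost grows only slightly faster than linearly with throughput. A quick log-log slope estimate:

```python
import math

# (data throughput in GB/s, cost in k€), read off the scaling plot
points = [(10, 4), (100, 80), (1000, 1200)]

(x0, y0), _, (x1, y1) = points
slope = (math.log10(y1) - math.log10(y0)) / (math.log10(x1) - math.log10(x0))
print(f"cost ~ throughput^{slope:.2f}")  # roughly throughput^1.2
```

An exponent near 1.2 means scaling the switched event builder by two orders of magnitude costs about 300x rather than the 100x of a purely linear system, reflecting the extra switch layers.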
iFDAQ - Properties

Initial intelligence (software):
• DIALOG communication
• Control of the DAQ configuration through web and C++ GUIs
• Multithreaded event processing and error detection
• DAQ status monitoring and system overview
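The multithreaded event processing and error detection mentioned above can be pictured as a pool of workers consuming events from a shared queue and flagging bad ones. A minimal sketch of the pattern (structure assumed for illustration, not the actual iFDAQ software):

```python
# Minimal sketch of multithreaded event processing with error
# detection: workers pull events from a queue and record bad ones.
# Illustrates the pattern only, not the iFDAQ implementation.
import queue
import threading

def worker(events, errors):
    while True:
        event = events.get()
        if event is None:          # sentinel: shut this worker down
            events.task_done()
            break
        if not event.get("ok", True):
            errors.append(event["id"])   # error detection
        events.task_done()

events, errors = queue.Queue(), []
threads = [threading.Thread(target=worker, args=(events, errors))
           for _ in range(4)]
for t in threads:
    t.start()
for i, ok in enumerate([True, False, True, True, False]):
    events.put({"id": i, "ok": ok})
for _ in threads:
    events.put(None)               # one sentinel per worker
events.join()
print(sorted(errors))  # -> [1, 4]
```

The queue decouples event arrival from processing speed, which is the same decoupling the hardware buffering provides one level below.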
Supported by