Pixie16 Analysis Software Suite
Analysis code for processing .ldf files
Here you will find information on how to quickly start using this software. We will work under the assumption that you have no desire to know the inner workings of the code, but simply want to get some graphs to look at. Because of the major overhauls, the original guide is no longer relevant for this software. If you have an older version of the code, refer to the Original Quickstart.
This guide will assume that you are not going to be using ROOT to perform the analysis. However, we will assume that you will be looking at data that requires high resolution timing algorithms. This is the most common type of user at this point; if you do not require this analysis, you can simply skip those instructions. We will also assume that you have compiled the documentation successfully, otherwise you wouldn't be reading this...
Let's assume that we have a simple TOF style measurement going. We'll have three channels: two for VANDLE and one for our start detector.
The code has a number of prerequisites that need to be met in order for you to begin. Here is a handy check-list:
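The exact requirements depend on your installation, but a typical setup (this list is our assumption; check the main README for the authoritative one) needs something like:
- A reasonably recent gcc/g++ and GNU make to build the analysis code itself
- The HRIBF/UPAK analysis libraries (scanorlib, orphlib, acqlib, ipclib), already compiled on your system
- The FORTRAN compiler that was used to build those libraries ('g77' or 'gfortran')
- The GNU Scientific Library (GSL), needed when the "PULSEFIT" fitting is enabled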
If you have successfully installed the prerequisites then you are now ready to prepare your Linux environment. You should add the following information into your ".bashrc", which is usually located in the ${HOME} directory of your username.
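A minimal sketch of what these additions typically look like (the directory names 'hhirf' and 'acq2/lib' and the variable names are the conventional ones for the HRIBF libraries; adjust them to your installation):

    export HHIRF_DIR=/absolute/path/to/hhirf
    export ACQ2_LIBDIR=/absolute/path/to/acq2/lib
    export HHIRF_GFORTRAN=1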
You should replace "/absolute/path/to" with the proper absolute path to the two directories. The last line in the above code depends on the FORTRAN compiler that was used to compile the HRIBF libraries. The two you are likely to encounter are 'g77' and 'gfortran'. If you are unsure about the installation, contact your system admin.
When you are finished making the necessary changes to the '.bashrc', remember to source it.
Moving right along, we are now ready to modify the Makefile for your specific installation. There are a number of flags in the upper part of the Makefile that you can modify to suit your needs. The full list of these flags can be found in Makefile Flags. For now, we just need to make sure that the "PULSEFIT" flag is uncommented.
Now comes the most important part. This one is going to be the biggie, the whole mamma-jamma. The configuration file controls the whole operation now. For a complete overview of the configuration, see the page XML Based Configuration File.
NOTE: This file is read at runtime; you do not have to recompile when you make changes here!
First, you should update the author information and description. This is not strictly necessary, but it makes it nicer when you are trying to figure out who made the file, and what they were trying to do.
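For example (a sketch; the node names follow config/Config.xml.example, and the contents are obviously yours to fill in):

    <Author>
        <Name>Jane Doe</Name>
        <Email>jane.doe@example.com</Email>
        <Date>January 2014</Date>
    </Author>
    <Description>
        One small VANDLE bar and a single start detector for a simple ToF measurement.
    </Description>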
We are assuming you want to look at VANDLE related data, so you're going to want to make sure that the Revision version is set to "F" for the Pixie16-250 modules.
You can also change the "EnergyContraction" and "EventWidth" here. For now, we will assume you're happy with whatever was there when you got the file. Some common values are 1e-6 s for the EventWidth and 1.0 for the EnergyContraction.
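A sketch of the relevant part of the Global node (node and attribute names as in config/Config.xml.example):

    <Global>
        <Revision version="F"/>
        <EventWidth unit="s" value="1e-6"/>
        <EnergyContraction value="1.0"/>
    </Global>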
OK, now we're onto the serious stuff, pay attention! The node DetectorDriver holds the various Processors and Analyzers that you are going to be using for the analysis. For our simple example we will be wanting the following pieces:
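A sketch of what this node could look like for our example (the analyzer/processor names are our assumption of a typical high resolution timing setup; any constructor arguments, e.g. for the VandleProcessor, are omitted here):

    <DetectorDriver>
        <Analyzer name="WaveformAnalyzer"/>
        <Analyzer name="FittingAnalyzer"/>
        <Processor name="BetaScintProcessor"/>
        <Processor name="VandleProcessor"/>
    </DetectorDriver>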
The two processor lines define the classes that will handle the manipulation of the data to measure our ToF. The analyzers work specifically on the traces, and these two will provide the high resolution timing. Some of the processors take arguments in their constructors (Ge, Vandle); information on these arguments can be found in the pages for the respective classes.
In some analyses, you do not need to define multiple processors to handle information. For example, the VandleProcessor does not need the Beta or Double Beta processors defined in order to perform time-of-flight calculations, because it pulls the necessary detector summaries itself and builds the required information. In these scenarios, you may simply define the VandleProcessor, unless you require histograms built in the Beta or Double Beta processors.
The Map node tells the program what type of detector is plugged into a given Module/Channel combination. We have moved to this scheme since we now define both the energy calibration and walk calibration at this point. The tags for a channel are now defined as a comma separated list inside the "tags" key. Below is the sample code for our current example:
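A sketch, assuming channels 0 and 1 are the two ends of our single small VANDLE bar and channel 2 is the start detector; the exact type/subtype/tags spellings should be checked against config/Config.xml.example:

    <Map verbose_calibration="False" verbose_map="False" verbose_walk="False">
        <Module number="0">
            <Channel number="0" type="vandle" subtype="small" tags="left"></Channel>
            <Channel number="1" type="vandle" subtype="small" tags="right"></Channel>
            <Channel number="2" type="beta" subtype="beta" tags="start"></Channel>
        </Module>
    </Map>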
None of our channels will be walk corrected or energy calibrated. This example may be updated in the future to add in a clover.
We are now ready to input the timing calibrations. These calibrations are used to align the gamma prompts for the ToF measurement. In addition, this section defines the physical position of the bar relative to the initial position of the measured particle (gamma, neutron, etc.). This calibration differs from the previous one, as it is done on a bar-by-bar basis and not per channel.
If you do not include a calibration for a channel in this node, the program will provide a default calibration of 0.0 for all offsets. In addition, any offset left out of the calibration, for example if you did not measure an "xoffset", will automatically be set to zero. This removes the need to declare every detector in the analysis.
Finally, the program now recognizes more than two start detectors. This is done through a list of "tofoffsets". Please refer to the sample code below, as well as the sample configuration: config/Config.xml.example. The "loc" field in the start nodes denotes the location of the start; in the event that the start is a bar style detector, this refers to the bar number.
Inside the TimeCalibration node we will have the following code:
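For our single bar and single start, this might look like the following sketch (node and attribute names follow config/Config.xml.example; all numbers are placeholders to be replaced by your own calibration):

    <TimeCalibration verbose_timing="False">
        <Vandle>
            <small>
                <Bar number="0" lroffset="0.0" z0="50.0" xoffset="0.0" zoffset="0.0">
                    <TofOffset location="0" offset="0.0"/>
                </Bar>
            </small>
        </Vandle>
    </TimeCalibration>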
You will find a detailed description of these variables in the Time Calibrations and VANDLE Setup section.
The Timing node contains all of the information necessary for the successful extraction of high resolution timing from a trace. It defines things such as the trace length and delay, the ranges for the waveform, and the fitting parameters for various detectors. The most important things to update in this section are the TraceDelay and TraceLength. Please note the units attached to each of the parameters.
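An abbreviated sketch of this node (the structure and names loosely follow config/Config.xml.example; the values are placeholders, and your TraceDelay and TraceLength must match the settings used during the experiment):

    <Timing>
        <Trace>
            <WaveformRange>
                <Low unit="sample" value="5"/>
                <High unit="sample" value="10"/>
            </WaveformRange>
            <TraceDelay unit="ns" value="344"/>
            <TraceLength unit="ns" value="496"/>
        </Trace>
        <Fitting>
            <SigmaBaselineThresh value="3.0"/>
        </Fitting>
    </Timing>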
We will not be using the TreeCorrelator for this example, refer to Tree Correlator for more info on this.
Finally, you can change the output information from the notebook if you'd like. This is not a critical step.
Events are created by grouping together channels which triggered at similar times. This time window is controlled by the variable EventWidth located in the Global node of Config.xml. If two successive channels in the list are within the "EventWidth" time, they are grouped together in an event. The variable is in units of Pixie16 clock ticks and must be multiplied by the appropriate sampling time to obtain the event width in ns. Please note that the "EventWidth" window is only applied between successive channels; thus it is possible (depending on the total trigger rate) to have events that are longer than the specified time window.
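As a quick worked example (assuming, purely for illustration, a 10 ns clock tick): an EventWidth of 100 ticks corresponds to a 1000 ns coincidence window between successive channels.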
After an event has been created it is sent for processing. The first step of processing is to calibrate the individual channels and summarize the information contained in the event. For each detector type present in the event, an object called DetectorSummary is created. This object holds detector related information such as the energy, timestamp, and multiplicity of the detector, among other things. For example, the following command will retrieve the energy of the scintillator in the event.
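A sketch of such a call (the accessor names GetMaxEvent() and GetCalEnergy() are assumed from the DetectorSummary/ChanEvent interface and may differ in your version):

    double energy = revt.GetSummary("scint")->GetMaxEvent()->GetCalEnergy();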
where revt is the name of the variable holding the raw event information and energy will contain the energy of the scintillator.
To retrieve the multiplicity associated with VANDLE ends use the following command:
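Again a sketch, assuming a summary named "vandle" and a GetMult() accessor:

    int vandleMult = revt.GetSummary("vandle")->GetMult();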
The reference manual provides a list of all the commands to retrieve information from the DetectorSummary or the RawEvent.
All plotting is controlled through the "plot" function defined in DeclareHistogram. This function is a C++ wrapper placed around the DAMM count1cc and set2cc subroutines, which allows the code to be switched easily between DAMM and ROOT histograms. For those using DAMM to view the output of the analysis, all plots are created in the "drrsub.f" file located in the scan directory.
In order to plot into a histogram, one must first define it in the DeclarePlots method of the Processor being used. In addition, each Processor has a specific range of DAMM IDs that it is allowed to use; these ranges are defined in DammPlotIds.hpp.
To define a 1D and a 2D histogram, you must first define the variables for the histogram numbers:
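A sketch based on the BetaScintProcessor (the histogram names here are illustrative; the per-Processor offset, 2050 in this case, lives in DammPlotIds.hpp):

    namespace dammIds {
        namespace beta_scint {
            const int D_MULT_BETA          = 0; // ends up as DAMM ID 2050
            const int D_ENERGY_BETA        = 1; // ends up as DAMM ID 2051
            const int DD_ENERGY_BETA__TIME = 2; // ends up as DAMM ID 2052
        }
    }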
This is generally found near the top of the '.cpp' file of interest. For the BetaScintProcessor, this will create histograms with IDs 2050, 2051, and 2052. We can now define the histograms themselves in the DeclarePlots method.
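Continuing the sketch (the size constants and titles are illustrative):

    void BetaScintProcessor::DeclarePlots(void) {
        DeclareHistogram1D(D_MULT_BETA, SA, "beta multiplicity");
        DeclareHistogram1D(D_ENERGY_BETA, SE, "beta energy");
        DeclareHistogram2D(DD_ENERGY_BETA__TIME, SE, SA, "beta energy vs. time");
    }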
SA is one of the histogram size constants defined for compatibility with DAMM; see DammPlotIds.hpp for their definitions.
To plot a one dimensional histogram of energies:
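Continuing with the illustrative IDs from above:

    plot(D_ENERGY_BETA, energy); // increments the bin corresponding to 'energy'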
You can send any type of numerical value to the plot function; the value is rounded to an integer before being passed to the DAMM plotting functions. A two dimensional histogram is plotted with the command
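for example (the ID and the time variable here are illustrative):

    plot(DD_ENERGY_BETA__TIME, energy, time); // increments the (energy, time) bin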
and a three dimensional histogram (plotting a trace for example) uses the command
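for instance (illustrative names again; here bin is the sample number, traceNum selects the row of the trace histogram, and traceValue is the trace content added as the weight):

    plot(DD_TRACE, bin, traceNum, traceValue);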
There are numerous examples in the code on how to do this.
After the compilation has completed successfully, the executable pixie_ldf_c will be present. Run the pixie_ldf_c program as you would any other program:
./pixie_ldf_c hisname
Where "hisname" is the name of the damm histogram that will be created. At the scanor prompt load in the appropriate ldf file as follows
SCANOR->file tutorial/tutorial.ldf
Next, start the analysis with the following command:
SCANOR->go
After starting, a variety of output should be printed to the screen. The first part lists all of the detectors being used in the analysis. The output also includes information about what is happening at various stages of the code, the status of reading in the configuration, and the creation of the detector summaries.
After completion of the analysis, end the SCANOR program:
SCANOR->end
You are now ready to take a look at your output in DAMM. This concludes the main part of the tutorial.
This should complete the basics of how to set up and run the code. There are a variety of histograms predefined in the Processors. Remember, under the new framework you do not have to recompile when switching out Processors; however, changes to the source code (new histograms, gates, etc.) do necessitate recompilation.