Archive | Physics

Fastest Way to Calculate Hadronic Cross Sections

Many particle physicists think that CompHEP is not well suited to hadronic calculations. Since CompHEP treats each subprocess separately in its numerical calculations, computing a process with many subprocesses (as usually happens in calculations for hadron colliders) can be a laborious task. To make the task simpler and to enable non-GUI calculations, both the symbolic and the numerical programs in CompHEP come with the batch Perl scripts symb_batch.pl and num_batch.pl, respectively.

Here’s an example of how to use these scripts:

1 -) Open CompHEP in your working directory as ./comphep

2 -) Enter a scattering process like pb,pb -> t,T.

3 -) Choose the C compiler and complete the symbolic calculation.

4 -) You will get the numerical session GUI and see Process, SubProcess, Monte Carlo session, etc. Notice that you can only calculate subprocess cross sections here, one at a time.

5 -) Open a new terminal but don’t close CompHEP numerical session window.

6 -) Go to your CompHEP working directory, where num_batch.pl and symb_batch.pl are located.

7 -) Write ‘./num_batch.pl -run vegas’

8 -) Write ‘./num_batch.pl -show cs’ and you’ll get the total cross section summed over all subprocesses.

OR edit process.dat to enter your process and use symb_batch.pl in the terminal window to complete the symbolic calculation, as in the sketch below.
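For a fully non-GUI session, the whole workflow reduces to something like the sketch below. It uses only the commands quoted above; whether symb_batch.pl needs extra options once process.dat is filled in depends on your CompHEP version, so check its batch documentation:

cd <your CompHEP working directory>   # where the batch scripts live
./symb_batch.pl                       # symbolic stage, driven by process.dat
./num_batch.pl -run vegas             # integrate every subprocess with VEGAS
./num_batch.pl -show cs               # total cross section over all subprocesses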

Reference: http://arxiv4.library.cornell.edu/pdf/0901.4757v1

MacOSX: Solution to PYTHIA / HBOOK Problem in Cernlib

Last month, I was trying to compile some PYTHIA code to calculate the snutau (tau sneutrino) cross section and its Pt and eta distributions. But unfortunately my trusted g77 compiler stubbornly refused to compile the code, producing the following errors:


Undefined symbols:
"_pylist_", referenced from:
_MAIN__ in ccJ7qu36.o
"_pyevnt_", referenced from:
_MAIN__ in ccJ7qu36.o
"_pyinit_", referenced from:
_MAIN__ in ccJ7qu36.o
"_hropen_", referenced from:
_MAIN__ in ccJ7qu36.o
"_hlimit_", referenced from:
_MAIN__ in ccJ7qu36.o
"_pystat_", referenced from:
_MAIN__ in ccJ7qu36.o
_MAIN__ in ccJ7qu36.o
"_pydata_", referenced from:
___g77_forceload_0.0 in ccJ7qu36.o
"_hfill_", referenced from:
_MAIN__ in ccJ7qu36.o
_MAIN__ in ccJ7qu36.o
_MAIN__ in ccJ7qu36.o
"_hrout_", referenced from:
_MAIN__ in ccJ7qu36.o
"_hbook1_", referenced from:
_MAIN__ in ccJ7qu36.o
_MAIN__ in ccJ7qu36.o
_MAIN__ in ccJ7qu36.o
"_hrend_", referenced from:
_MAIN__ in ccJ7qu36.o
ld: symbol(s) not found
collect2: ld returned 1 exit status

After a little research into HBOOK, I discovered that it is a rather old component, predating even Fortran 77, and that the only way to keep it working like this is to link in the CERNLIB components at compile time. But that did not work on Mac OS X either, probably because the Mac OS X build of CERNLIB does not support the HBOOK libraries, although this works on Linux.

Anyway, why do we have to use HBOOK at all? At the end of the day we just want some data files to plot with PAW or ROOT, and that is possible using Pythia's own components. So the principle should be: "use Pythia commands to get an output file, because that's the easiest way!" Here is my solution: in the code below I replaced all the HBOOK calls with their PY... counterparts, and it worked perfectly.
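For reference, the substitutions are essentially one-to-one (the argument patterns below are the ones used in my code; see the listing):

HBOOK1(id,title,nx,xlo,xhi,0.)  ->  PYBOOK(id,title,nx,xlo,xhi)
HFILL(id,x,0.,w)                ->  PYFILL(id,x,w)
HROPEN/HROUT/HREND              ->  PYDUMP(3,lun,1,IHI) plus PYHIST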

Compile with "g77 -o mycode.x -w mycode.f libpythia6421.a".
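After compiling, running the executable writes the histogram contents straight to text files; the file names come from the OPEN/PYDUMP calls near the end of the listing:

./mycode.x
ls pt.dat eta.dat MTau.dat   # histogram dumps written by PYDUMP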

C...All real arithmetic in double precision.
      IMPLICIT DOUBLE PRECISION(A-H, O-Z)
      IMPLICIT INTEGER(I-N)
C...Three Pythia functions return integers, so need declaring.
      INTEGER PYK,PYCHGE,PYCOMP
C...Parameter statement to help give large particle numbers
C...(left- and righthanded SUSY, technicolor, excited fermions,
C...extra dimensions).
      PARAMETER (KSUSY1=1000000,KSUSY2=2000000,KTECHN=3000000,
     &KEXCIT=4000000,KDIMEN=5000000)
C...EXTERNAL statement links PYDATA on most machines.
      EXTERNAL PYDATA
      DIMENSION IHI(10)
C...Commonblocks.
C...The event record.
      COMMON/PYJETS/N,NPAD,K(4000,5),P(4000,5),V(4000,5)
C...Parameters.
      COMMON/PYDAT1/MSTU(200),PARU(200),MSTJ(200),PARJ(200)
C...Particle properties + some flavour parameters.
      COMMON/PYDAT2/KCHG(500,4),PMAS(500,4),PARF(2000),VCKM(4,4)
C...Decay information.
      COMMON/PYDAT3/MDCY(500,3),MDME(8000,2),BRAT(8000),KFDP(8000,5)
C...Selection of hard scattering subprocesses.
      COMMON/PYSUBS/MSEL,MSELPD,MSUB(500),KFIN(2,-40:40),CKIN(200)
C...Parameters.
      COMMON/PYPARS/MSTP(200),PARP(200),MSTI(200),PARI(200)
C...Supersymmetry parameters.
      COMMON/PYMSSM/IMSS(0:99),RMSS(0:99)

C....Real definitions...
      REAL CTETM,CTETP,ETAUP,ETAUM,PTAUP,PTAUP2,PTAUM,PTAUM2
      REAL PTAUMX, PTAUMY, PTAUMZ
      REAL PTAUPX, PTAUPY, PTAUPZ
      REAL PTX, ETAX, XMTAU
C ....rapidity: YX
C--------------------------------------------

C...First section: initialization.
      NEV=10000

C...Select generic SUSY generation.
C...39:all MSSM processes except Higgs prod.
C...41:stop pair(ISUB=261-265), 42:slepton pair(201-214), 45:sbottom(281-296)
C...ISUB=210,211,212 for slepton+sneutrino
      MSEL=0
      MSUB(214)=1

C...Set SUSY parameters in SUGRA scenario.
      IMSS(1)=2		!mSUGRA parameters given to PYTHIA
      RMSS(8)=230D0	!m_0
      RMSS(1)=360D0	!m_1/2
      RMSS(5)=10D0	!tan(beta)
      RMSS(4)=1D0	!sign(mu)
      RMSS(16)=0D0	!A_0

C...Channels
      do kk=1949,1974
      MDME(kk,1)=0
      enddo
      MDME(1950,1)=1

C...Channels for W boson
      do kw=206,208
      MDME(kw,1)=0
      enddo

C...If interested only in cross sections and resonance decays:
C...switch on/off initial and final state radiation,
C...multiple interactions and hadronization.
      MSTP(11)=0	! 1:QED radiation
      MSTP(61)=0	! 2:ISR
      MSTP(71)=0	! 1:FSR
      MSTP(81)=0	! 1:multiple int.
      MSTP(111)=0 	! 1:hadronization

C...Initialization for a 3 TeV e+e- collider in its CM frame.
       CALL PYINIT('CMS','e-','e+',3000D0)

C...List resonance data: decay channels, widths etc.
       CALL PYSTAT(2)

C...Book Histograms
	CALL PYBOOK(10,'PT',100,0D0,1000D0)
	CALL PYBOOK(20,'ETA',100,-5D0,5D0)
	CALL PYBOOK(30,'MTAUTAU',100,0D0,2000D0)

C--------------------------------------------------

C...Second section: event loop.

C...Loop over the number of events.
       DO 200 IEV=1,NEV
        IF(MOD(IEV,500).EQ.0) WRITE(6,*)
     &  'Now at event number',IEV

C...Event generation.
         CALL PYEVNT

C...List first few events.
          IF(IEV.LE.5) CALL PYLIST(1)

C...Fill the masses of interesting (s)particles.
C...Fill pt of particle
       DO I=1,N
C...Catch tau- lepton
       IF((K(I,2).EQ.15).AND.(K(I,1).EQ.1)) THEN
       PTX=SQRT(P(I,1)**2+P(I,2)**2)
       CTETM=P(I,3)/SQRT(P(I,1)**2+P(I,2)**2+P(I,3)**2)
       ETAX=-LOG(TAN(ACOS(CTETM)/2.))
       etaum=P(I,4)
       ptaum2=P(I,1)**2+P(I,2)**2+P(I,3)**2
       ptaum=SQRT(P(I,1)**2+P(I,2)**2+P(I,3)**2)
       ptaumx=P(I,1)
       ptaumy=P(I,2)
       ptaumz=P(I,3)

       CALL PYFILL(10,DBLE(PTX),1D0)
       CALL PYFILL(20,DBLE(ETAX),1D0)
C......       CALL HFILL(10,PTX,0.,1.)
C......       CALL HFILL(20,ETAX,0.,1.)
       ENDIF
C...Catch tau+ lepton
       IF((K(I,2).EQ.-15).AND.(K(I,1).EQ.1)) THEN
       etaup=P(I,4)
       ptaup2=P(I,1)**2+P(I,2)**2+P(I,3)**2
       ptaup=SQRT(P(I,1)**2+P(I,2)**2+P(I,3)**2)
       ptaupx=P(I,1)
       ptaupy=P(I,2)
       ptaupz=P(I,3)
       ENDIF
       ENDDO

C...Fill the tau-pair invariant mass once per event, after the
C...particle loop, so that both tau momenta have been recorded.
       XMTAU=SQRT((ETAUP+ETAUM)**2-(ptaup2+ptaum2
     .   +2.0*(ptaupx*ptaumx+ptaupy*ptaumy+ptaupz*ptaumz)))
C...       CALL HFILL(30,XMTAU,0.,1.)
       CALL PYFILL(30,DBLE(XMTAU),1D0)

C...End of documentation and event loops.
200    CONTINUE

C--------------------------------------------

C...Third section: produce output and end.

C...Cross section table.
       CALL PYSTAT(1)

C...Histogram close
C...       CALL HROUT(0,ICYCLE,' ')
C...       CALL HREND('SUSY')

C...Histograms.

       OPEN(11,file='pt.dat',STATUS='unknown')
       IHI(1)=10
       CALL PYDUMP(3,11,1,IHI)
       CLOSE(11)
       OPEN(22,file='eta.dat',STATUS='unknown')
       IHI(1)=20
       CALL PYDUMP(3,22,1,IHI)
       CLOSE(22)
       OPEN(33,file='MTau.dat',STATUS='unknown')
       IHI(1)=30
       CALL PYDUMP(3,33,1,IHI)
       CLOSE(33)
       CALL PYHIST
       END

A Simple C++ Simulation For Beginners

Phenomenology in physics mostly deals with simulating events and obtaining data from the simulations to compare with real event data. During event processing it is not necessary to put extra effort into visuals, so this should not be confused with visual simulation: we are just making event-based calculations. You may then ask: what exactly do we simulate, and can any calculation be a simulation? Notice that in scientific experiments you always need a satisfactory amount of statistics, so a simulation basically needs a scenario for gathering statistics. Here I would like to present a calculation of the number Pi as a sample simulation, in which we collect statistics by producing random numbers inside a circle of radius r = 1.

Calculation: the number Pi
The method: Monte Carlo simulation
Fundamental formulas: πr² and x² + y² = 1 (note that the radius of the circle is 1)

1-) Write the code below and compile it with "g++ pi.cpp -o pi.x"

#include<iostream>
#include<cmath>
#include<cstdlib>
#include<ctime>

using namespace std;
int main(){
	int jmax=1000; // number of Pi estimates to print (length of output file)
	int imax=1000; // number of random points thrown per estimate
	double x,y;    // coordinates of a random point in the unit square
	int hit;       // number of points falling inside the quarter circle
	srand(time(0));
	for (int j=0;j<jmax;j++){
		hit=0;
		for(int i=0;i<imax;i++){
			x=double(rand())/double(RAND_MAX);
			y=double(rand())/double(RAND_MAX);
			if(y<=sqrt(1-x*x)) hit+=1; // count HITs under the circle arc x^2+y^2=1
		}
		cout<<4.0*double(hit)/double(imax)<<endl; // print one Pi estimate
	}
	return 0;
}

2-) To understand the code: we have just two loops here. The inner loop produces random numbers in [0,1) and uses them as the coordinates x and y. The "if" condition increments the hit count whenever the point (x,y) falls inside the quarter circle (see the figure below).

[Figure: hit production area (the quarter circle inside the unit square)]

The outer loop resets our variables and prints out a Pi estimate according to the formula: area of quarter circle / area of square = (1/4)πr²/r² = (1/4)π = accepted hits / total points = hits/imax, so π ≈ 4·hits/imax. For example, 785 hits out of 1000 points give the estimate 3.14.

3-) Run it as “./pi.x > pi.dat”

4-) Plot the output file.

[Figure: histogram of the Pi estimates]

I used the "root macro" below to read the data and convert it to a .root file.

{
	gROOT->Reset();
	ifstream in;
	in.open("pi.dat");
	Float_t x;
	Int_t nlines = 0;
	TFile *f = new TFile("pi1.root","RECREATE");
	TH1F *h1 = new TH1F("h1","pi_graph",100,2.5,4.0);
	TNtuple *ntuple = new TNtuple("ntuple","pi","x");
	// read at most 10000 values; nlines counts each value exactly once
	while (nlines < 10000) {
		in >> x;
		if (!in.good()) {break;}
		if (nlines < 5) {printf("x=%5f\n",x);}
		h1->Fill(x);
		ntuple->Fill(x);
		nlines++;
	}
	in.close();
	f->Write();
	printf("%d values found\n",nlines);
	h1->SetXTitle("pi");
	h1->SetYTitle("Events");
	h1->Draw();
}
  • Paste the above ROOT macro into a C file and name it "pintuple.c"
  • Start your ROOT analysis program in the same directory where you saved pintuple.c: "root"
  • Execute the file: "root> .x pintuple.c"
  • You'll get an ntuple file called "pi1.root"
  • Type "TBrowser g" in ROOT.
  • Open the pi1.root file and you'll get the histogram above. Congratulations 🙂

Cernlib Manual Installation

If you are using a Unix/Linux-based operating system and are having difficulties installing CERNLIB, the best approach is the simplest one: manual installation.

1-) Go to http://cernlib.web.cern.ch/cernlib/version.html and click the "compressed tar files" link appropriate for your system.

2-) cd / (go to the root directory)

3-) mkdir cern

4-) Copy the 3 tar files you downloaded (cernlib.tar.gz, cernbin.tar.gz, include.tar.gz) into the cern folder you created.

5-) Go into the cern folder you created and extract the archives:
tar -zxvf cernlib.tar.gz
tar -zxvf cernbin.tar.gz
tar -zxvf include.tar.gz
6-) Create symbolic links in this folder:
ln -s 2006 pro
ln -s 2006 new
(If you don't have a 2006 folder, replace it with the version you have, e.g. 2004, 2005, 2007, etc.)

7-) Now you should set some system variables: type "export" on the command line to see all the environment variables and their values that have been declared so far.

8-) You must set the CERN-related variables. Either type the following commands on the command line or add them to your /etc/bashrc file (for Unix). If you add them to /etc/bashrc, you won't need to set these variables every time you start your computer.

export CERN=<Your Cern Directory>
export CERN_ROOT=<Your Cern Directory>
export CERNLIB=$CERN/pro/lib
export CERNBIN=$CERN/pro/bin
export PATH=$PATH:$CERNBIN

IMPORTANT: Please check your environment variables by typing "export" on the command line. If you previously installed CERNLIB via Fink or apt-get, you may not get the correct CERN_ROOT, CERNLIB or CERNBIN values. In that case, open the corresponding bashrc/profile files and edit your CERN variables. For Fink, edit the cernlib… csh and sh files under the /sw/etc/profile.d folder.
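Once the variables are set, a CERNLIB-linked compilation should look roughly like this (a sketch: mycode.f is a placeholder, and packlib, mathlib and kernlib are the standard CERNLIB libraries):

g77 -o mycode.x mycode.f -L$CERNLIB -lpacklib -lmathlib -lkernlib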

Running ROOT on Lxplus Server of CERN

The Mac OS X terminal window has more powerful features than those of most other operating systems, but if you are working on a remote server you need to know a little more about it.

If you are using Mac OS X and connect to lxplus at CERN with the regular ssh command, the first thing you will notice is that your system cannot open the X11 window that ROOT needs. To overcome this problem, connect using either

ssh -X username@lxplus.cern.ch
or
ssh -Y username@lxplus.cern.ch

The -X option gives you an ssh connection with X11 forwarding, and the -Y option gives you an ssh connection with trusted X11 forwarding.

After logging in with your username and password, you have two options for running ROOT.

1 -) Set the ROOT variables below:

export ROOTSYS=/afs/cern.ch/sw/lcg/app/releases/ROOT/5.28.00/i686-slc5-gcc43-opt/root
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ROOTSYS/lib
export PATH=$PATH:$ROOTSYS/bin

2-) Or just source the Athena framework and set up ROOT from Athena (note that Athena contains much more than ROOT):

export AtlasSetup=/afs/cern.ch/atlas/software/releases/16.6.0/AtlasSetup
alias asetup='source $AtlasSetup/scripts/asetup.sh'
asetup 16.6.0

–This procedure has recently changed and become easier; please search the web for the current instructions on sourcing the Athena framework–

Then just write “root” and start your analysis.
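A quick way to check that everything was picked up (the -l flag only suppresses the splash screen):

which root    # should point into $ROOTSYS/bin
root -l       # start an interactive session; quit with .q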

Abbreviations at CERN

If you are someone new at CERN, let's say an undergraduate or a Ph.D. student, sooner or later you'll run into the list of abbreviations you have to learn. It is no ordinary task to follow along while someone is talking about AODs, ESDs, DPDs, etc., so you'd better memorize the list below before losing your mind going "what the hell is going on here!!" I am presenting a short dictionary below; the actual list is quite a bit longer. :))

ACCU: Advisory Committee of CERN Users; High level committee to discuss the projects of member countries and users.

AFS: Andrew File System; a network file system that allows users to reach datasets and experimental results. AFS is a distributed file system available for UNIX and other operating systems.

AMI: Atlas Metadata Interface; AMI is the primary physicist web tool that provides physics metadata bookkeeping and access to the Dataset Selection Catalog for ATLAS through an optimized search interface.

ALICE: A Large Ion Collider Experiment; it is also the name of the corresponding detector at the LHC.

AOD: Analysis Object Data; the file format of reduced datasets derived from ESDs.

ARDA: A Realization of Distributed Analysis; is an LCG project whose main activity is to enable LHC analysis on the grid.

ATHENA: the name of the ATLAS software framework, which is a derivative of the Gaudi common framework project.

ATLAS: A Toroidal LHC ApparatuS; the name of one of the detectors which is 44 meters long and 25 meters in diameter.

ATLFAST: is the package for fast simulation of ATLAS detector and physics analysis.

BSM: Beyond the Standard Model; generally used for models other than the Standard Model, such as supersymmetric, extra-dimensional, exotic or technicolor models.

CAF: Cern Analysis Facility; is a cluster at CERN running PROOF. It can be used for prompt analysis of pp data as well as selected PbPb data. Furthermore calibration programs can be run on the CAF.

CASTOR: Cern Advanced STORage Manager; is a hierarchical storage management (HSM) system developed at CERN used to store physics production files and user files.

CERN: European Organization for Nuclear Research, (Originally French: Organisation Européenne pour la Recherche Nucléaire)

CDS: Cern Document Server; This is simply the site located at the address http://cdsweb.cern.ch/

CINT: is an interpreter for C and C++ code. It is useful e.g. for situations where rapid development is more important than execution time.

CLIC: The Compact Linear Collider; a planned CERN project to collide particles at 3 TeV in a linear accelerator.

CMS: The Compact Muon Solenoid Experiment; is also the name of the corresponding detector at Cern.

COMPHEP: a stand-alone software package for generating events and performing high energy physics calculations via the matrix element Monte Carlo method.

CSC: 1) Cathode Strip Chambers; 2) Cern School of Computing

DDM: Distributed Data Management; allows users to manage experiment data distributed over the Tier centres around the world.

DFS: Data File System;

DLF: Distributed Logging Facility is a powerful logging facility which allows centralised logging and accounting in CASTOR.

DPD: Derived Physics Dataset; the file format of datasets derived from either ESDs or AODs.

DQM: Data Quality Monitoring; is an experimental online/offline data monitoring system developed by Cern.

EDM: Event Data Model; is the platform that allows the use of common software between online data processing and offline reconstruction.

ESD: Event Summary Data; the file format derived from raw data in collision experiments at CERN.

EVO: Enabling Virtual Organizations; a Java-based network messenger and communication tool widely used at CERN.

GANGA: is an easy-to-use frontend for job definition and management with a basic interface, implemented in Python.

GLIMOS: Group Leader in Matters of Safety for Experiments and Tasks

LCG: LHC Worldwide Computing Grid; a global collaboration of more than 170 computing centres in 34 countries. The mission of the WLCG project is to build and maintain a data storage and analysis infrastructure for the entire high energy physics community that will use the Large Hadron Collider at CERN.

LHC: Large Hadron Collider;

LHCb: Large Hadron Collider beauty; an experiment set up to explore what happened after the Big Bang that allowed matter to survive and build the Universe we inhabit today.

LHCf: Large Hadron Collider forward experiment; consists of two small calorimeters, each placed 140 meters from the ATLAS interaction point. Their purpose is to study forward production of neutral particles in pp collisions.

LSF: Load Sharing Facility; the batch service run by the CERN computing department.

MICROMEGAS: MicroMEsh GAseous Structure; is a detector first developed at CERN in Geneva for high-energy physics charged-particle tracking applications.

N-Tuple: in set theory, an n-tuple is a sequence (or ordered list) of n elements, where n is a positive integer. In high energy physics the entries are separate per-event measurements such as rapidity, missing transverse energy, or photon, muon and tau multiplicities.

NICE: the centrally managed Windows desktop computing environment at CERN; users need a NICE account to be able to use the Windows-based services.

PANDA: Production and Distributed Analysis system; the system for defining and executing jobs on the Grid.

PLUS: Public Login User Service (LXPLUS)

PROOF: Parallel ROOT Facility; allows interactive parallel analysis on a local cluster. Interactive means that you see the results right away.

OSG: Open Science Grid;

ROOT: An object oriented framework for large scale data analysis.

Savannah: the web portal at CERN used for hosting software projects and tracking their development, distribution and maintenance.

SPS: The Super Proton Synchrotron (SPS) is a 6.9 km long particle accelerator at CERN. It is also used to inject particles into the LHC.

SUSY: Supersymmetry; a proposed symmetry between fermions and bosons under which every fermion has a corresponding, so far unobserved, boson field and vice versa.

SM: Standard Model; the model describing three of the four known interactions (electromagnetic, weak and strong) and their particles, excluding the effects of gravitation, the fourth known interaction.

TOTEM: Total Cross Section, Elastic Scattering and Diffraction Dissociation at the LHC; the experiment that studies forward particles to focus on physics not accessible to the general-purpose experiments.

TIER: Grid centres that collect the experiment data around the world. They are also classified according to the data they receive, as Tier-0, Tier-1, Tier-2.

TPC: The Time Projection Chamber (TPC) is the main tracking detector in the central barrel of the ALICE experiment at LHC.

TSO: Territorial Safety Officer.

VO: Virtual Organization; related to certification on the CERN Grid.

Installing Scientific Packages on Mac OS X

If you have decided to buy a MacBook, the first thing you need to know may be that "it's not a Big Mac at all, it's a Mac". Don't even think you can consume the whole Mac universe, which runs almost parallel to our own. It may even take quite a lot of time to figure out what kind of trouble you are in, especially if you work in science. Don't forget that some say "people who buy Macs are the same people who said Beta was better than VHS 15 years ago".

For an average physicist like me, building a useful system on a MacBook can be a long-term effort.
Here is a list of the software a scientist may need:

Compilers:
gcc, g++, gfortran, g77, f77, Java, etc.

Package Managers:

MacPorts, Fink, Darwin, RPM, Apt-get, …etc.

Analysis Software:

ROOT, Jas3, Aida, Octave, …etc.

Computing Software:

Mathematica, Matlab, …etc.

Physics Phenomenology Software:

Comphep, Calchep, Pythia, Fluka, Geant4, Madx,
Isajet, Prospino …etc.

Dependency Packages:

Cernlib, Openmotif, X11, Latex …etc.

Office Tools:

Excel, Text Editors, Presentation tools, PDF, DVI, PS viewers, …etc.

Web Browsers

Mozilla Firefox, Internet Explorer, Safari, Opera, Netscape, …etc.

Now the main question comes up: how do you install all this software on Mac OS X 10.5.x?

Here is a prescription for beginners:
(1). First of all, install compilers
(2). Install some package managers
(3). Install the other software packages using the compilers and the package managers you installed.

(1)

GCC Compiler Installation: Use the MacBook installation DVD; if you search it, you will find a package called Xcode. Just install the Xcode development tools and the gcc compiler comes with them. Then open a terminal window and type "gcc". If the error it returns is something like "no input files", you have successfully installed the gcc compiler.
Or you can check its location by typing "which gcc" on the command line.

G77 Compiler Installation: Unfortunately, g77 does not come bundled with other packages or package managers, so you should install it manually:

  • Download the tar file most suitable for your computer: g77-intel-bin.tar.gz (Intel Macs only) or g77-bin.tar.gz (PowerPC only).
  • Open a terminal window and type "gunzip g77-bin.tar.gz".
  • Type "sudo tar -xvf g77-bin.tar -C /." to install the files into your /usr/bin folder, then check the result as below.
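To verify the installation (assuming the tarball really unpacked into /usr/bin):

which g77   # should print /usr/bin/g77
g77 -v      # prints version information if the install worked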

Gfortran Compiler Installation: There is a dmg installer for the gfortran compiler, so just download it and install it with a double click.

(2)

MacPorts Installation:

MacPorts provides open-source software for managing, compiling, installing and upgrading packages, including libraries and utilities, on the Mac OS X operating system. You can easily install MacPorts by downloading its dmg file and double-clicking it. After installation, open a terminal and learn the basic MacPorts commands by typing "man port". For Mac users, MacPorts currently seems to be the most powerful package manager, with almost 8000 packages. Some useful MacPorts commands are:

  • port list : lists all available packages, so you can find the package name before installing.
  • port upgrade <pkgname> : upgrades the given package to the newest version.
  • port search <pkgname> : searches for the package name (or a part of it) in the complete list of packages. You can use wildcards like *root* or *X11*, etc.
  • port install <pkgname> : the most used MacPorts command; downloads, compiles, stages and installs the package you entered.
  • port installed : shows the packages installed on your computer.
  • port uninstall <pkgname> : removes the package.
  • port deps <pkgname> : shows the dependencies of a package on other packages.
For the complete list of commands, please read the MacPorts documentation; a short example session follows below.
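A typical session then looks like this (the package name root is only an example; use "port search" to find the exact name in the current ports tree):

port search root            # find the exact package name
sudo port install root      # download, build and install it
port installed | grep root  # confirm the installation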

FINK Installation:

Fink automates the process of downloading a binary package, or downloading the source package, applying patches, compiling and installing it. Fink is one of the most mature Unix-style package managers for the Mac, and its documentation is easy to find on the web. There is an impressive number of applications you can install via Fink. Fink was designed carefully so as not to disturb or modify the system, and it can be uninstalled with a single command: 'sudo rm -r /sw'. You will definitely want f77, imagemagick, ghostscript for X, ispell and xdvi, plus gimp if you want to edit graphics or xv to preview them.

To download Fink visit http://www.finkproject.org and install its dmg file on your Macbook.

  • Open a terminal window and type "fink" to learn its usage and options.
  • Type "apt-get" (Fink also installs the apt-get package).
  • Fink downloads packages from mirror servers around the world, so I strongly recommend configuring the mirror settings. Type "fink configure" and Fink will start asking configuration questions; press "Enter" for the default values. When it asks about mirrors, choose "4 - Nearest mirrors from your continent" and then your country, and answer the remaining questions to complete the configuration. If you cannot download packages from the chosen mirrors during an installation, reconfigure these settings.
  • Fink supports stable packages as well as unstable ones, which are untested or have known problems. To see the supported package list, type "fink list" on your command line.
  • To be able to install unstable packages, type "fink configure" and activate the unstable-packages option when asked about stable/unstable packages; press "Enter" to accept the defaults for the other questions.
  • To download and install a package, type "sudo fink install packagename" on the command line (e.g. "sudo fink install cernlib").
  • WARNING: Fink automatically resolves package dependencies and usually downloads a whole set of packages, which can considerably lengthen the installation. So I do not recommend downloading huge packages, e.g. ROOT, via Fink: it may take a very long time together with related packages like gcc4, X11, openmotif, etc.

(3)

CERNLIB Installation:

  • First, make sure you installed fink and the compilers gcc, g77.
  • Open a terminal window and write “sudo fink install cernlib”
  • Fink will download and install cernlib in a few minutes.
  • Type "cernlib" in the terminal window to see the related CERNLIB locations. If you get an error message, make sure you installed Fink and the compilers properly.

OR you may prefer to install CERNLIB manually to avoid a non-standard installation.

COMPHEP Installation:

  • Visit CompHEP's website, register as a member, and then download its .tar files.
  • Make sure you installed the compilers gcc and g77 properly.
  • Open a terminal window in the folder where CompHEP's .tar file exists.
  • Write “sudo tar -xvf comphep-4.5.1.tar”
  • Write “cd comphep-4.5.1”
  • Write “./configure”
  • Follow the instructions and write "make" to complete the installation.
  • Write "make setup WDIR=calculations" to create a user directory called calculations.
  • To run the application, write "cd calculations" and then "./comphep". The whole sequence is collected below.
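In one place, the sequence from the steps above is:

sudo tar -xvf comphep-4.5.1.tar
cd comphep-4.5.1
./configure
make                            # complete the installation
make setup WDIR=calculations    # create the user directory
cd calculations
./comphep                       # run CompHEP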

ROOT Installation:

  • Download the ROOT analysis framework from the official ROOT web site at CERN. If you are after the latest features, download the newest version for your system (Mac OS X Intel or PowerPC).
  • To open the related .tar file write “sudo tar -xvf root_v5-1.24.00.macosx105-i386-gcc-4.0.tar” on the command line.
  • Write "cd root_v5-1.24.00.macosx105-i386-gcc-4.0"
  • Write "./configure --help" and "./configure" to set the installation options.
  • Write “make” and complete the installation.

You can also set environment variables so that ROOT starts whenever you open a terminal window and type "root". To edit bash and add the environment variables:

  • Open a terminal window and write "cd /" to go to the root directory.
  • Write "sudo nano etc/bashrc" to open an editor on the bash profile.
  • Add the following lines to the bashrc file, then save and exit (Ctrl-O and Ctrl-X):
    • export ROOTSYS=/usr/local/root
    • export DYLD_LIBRARY_PATH=$ROOTSYS/lib
    • export PATH=$PATH:$ROOTSYS/bin
  • Change the $ROOTSYS variable (first line) if you installed the ROOT framework into another directory.

General Instructions for Installations:

  • Download the software as a tarball.
  • Decompress the tarball with "sudo tar -zxvf <software>.tar.gz".
  • cd into the folder and edit the Makefile. DON'T FORGET to make it compatible with your paths and compilers, or update your compilers.
  • Write "./install" or "./configure" if the package ships such shell scripts.
  • Write "make" and start compiling.
  • Trace any errors you get; a generic session is sketched below.
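Putting those steps together, a generic session looks like this (<software> is a placeholder; the final "make install" applies only if the package's Makefile provides an install target):

sudo tar -zxvf <software>.tar.gz
cd <software>
./configure         # or ./install, whichever the package ships
make
sudo make install   # optional; only if the Makefile defines it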

I strongly recommend registering at Apple's developer website http://developer.apple.com/ for recent updates and development tools.