Mac OS X specific notes

*** Instructions before starting libMicro ***

# Disable Open Directory and LDAP using the Directory Utility app
# Turn off AirPort
# Turn off Spotlight. In Terminal, execute the following:

	sudo service com.apple.metadata.mds stop

# Turn off Time Machine in System Preferences
# Wait at least 2 minutes after booting to the desktop for the boot
  cache to settle down

*** Make and run quickstart ***

	make
	./bench >output.txt

gives you a text file named output.txt with the results of one run.

	./multiview output1.txt output2.txt >compare.html

gives you an HTML file comparing two runs.

*** Makefile ***

The Makefile invokes Makefile.Darwin, which invokes Makefile.com.Darwin.
Just invoke make, with options if necessary, and everything should
build correctly. The binaries are placed in a directory called
bin-ARCH, where ARCH is either the default or the value specified via
the ARCH flag when building.

Options for invoking the Makefile are:

ARCH		defaults to i386

		if you just want to build for ppc, specify
			make ARCH=ppc
		this will put the results in bin-ppc

		to build fat/multi-architecture, specify
			make ARCH=fat
		the makefile will automatically build with
		ARCH_FLAG="-arch ppc -arch i386 -arch x86_64" and put
		the results in bin-fat

		to build with only two of the architectures, see below

ARCH_FLAG	defaults to -arch $(ARCH)

		to build fat/multi-architecture, specify
			make ARCH_FLAG="-arch ppc -arch i386" ARCH=fat
		this will put the results in bin-fat

OPT_FLAG	defaults to -g

		to build optimized, specify
			make OPT_FLAG=-s

SEMOP_FLAG	defaults to -DUSE_SEMOP

		to eliminate SEMOP usage, specify
			make SEMOP_FLAG=
		this is needed on some lower-end systems (e.g. M63)

These can be combined, e.g.

	make ARCH=ppc SEMOP_FLAG=

*** Before running benchmarks ***

The shell script create_stuff should be run before any benchmarking.
This script takes care of raising the process limits, which would
otherwise cause several of the tests to fail - if it is not run, you
will see:

	Running:	pipe_pst1
	fork: Resource temporarily unavailable

in your stderr during the runs. After you run create_stuff, the system
needs to be rebooted.

*** Running the benchmarks ***

The shell script "bench" will run all the benchmarks, or you can pass
it a parameter to run a single benchmark, e.g.

	bench lmbench_bw_unix

Watch for:

	# WARNINGS
	# Quantization error likely; increase batch size (-B option) 4X to avoid.

in the output. To see an example, run the supplied testbench script.
Add or adjust the -B parameter for any benchmark that fails. Note that
the quantization error refers to the benchmark preceding the warning,
not the one following it.

A typical run:

	$ make clean
	$ make
	$ ./create_stuff
	$ ./bench > output1
	Running:	getpid		for	0.13353 seconds
	Running:	getppid		for	3.65609 seconds
	Running:	getenv		for	0.20924 seconds
	Running:	getenvT2	for	0.37437 seconds
	Running:	gettimeofday	for	0.58077 seconds
	etc...

Use the supplied multiview script to compare runs, like:

	multiview output1 output2 > compare.html
	open compare.html

(Safari launches.) The comparison shows the output2 results as a
percentage change from the output1 results.

*** Adding additional benchmark tests ***

Look at the sample file trivial.c. This demonstrates how to do
argument passing, the flow of control of a benchmark, etc. for the
trivial case. The tests starting with "lmbench_" were ported from the
lmbench suite, so they might be good examples as well.
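The following is a rough skeleton of a new benchmark, loosely modeled
on trivial.c. The hooks and globals used here (benchmark_init,
benchmark_optswitch, benchmark, lm_optstr, lm_usage, lm_tsdsize,
lm_optB, result_t) are the framework interfaces as exercised by
trivial.c; treat trivial.c and libmicro.h as authoritative for the
exact set and signatures, since this sketch omits the per-run,
per-worker and per-batch init/fini hooks, and the -x option is purely
illustrative.

	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	#include "libmicro.h"

	static int opt_x = 1;		/* example option, hypothetical */

	/* called once: declare options, usage text, per-thread data size */
	int
	benchmark_init()
	{
		lm_tsdsize = 0;		/* no per-thread state needed */
		(void) sprintf(lm_optstr, "x:");
		(void) sprintf(lm_usage,
		    "       [-x multiplier (default %d)]\n"
		    "notes: skeleton benchmark that measures nothing\n",
		    opt_x);
		return (0);
	}

	/* called for each command-line option declared in lm_optstr */
	int
	benchmark_optswitch(int opt, char *optarg)
	{
		switch (opt) {
		case 'x':
			opt_x = atoi(optarg);
			break;
		default:
			return (-1);
		}
		return (0);
	}

	/* the timed inner loop; lm_optB is the batch size (-B) */
	int
	benchmark(void *tsd, result_t *res)
	{
		int i;

		for (i = 0; i < lm_optB; i++) {
			/* the operation being measured goes here */
		}
		res->re_count = i;
		return (0);
	}

To hook a new test into the build, copying trivial's entries in the
Darwin makefiles is probably the simplest pattern; the resulting
binary can then be run directly from bin-ARCH or added to the bench
script.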
*** Things to do ***

* port the rest of the lmbench benchmarks into this framework
* create a website that makes it easy to compare many builds across
  many machines, with a historical repository of runs
* document better how to write a benchmark for this framework
  (started in trivial.c)
* check this into xnu/test
* create new benchmarks

*** Leopard notes ***

Due to rdar://4654956 and its original, rdar://2588252, you cannot run
these tests on Leopard without removing the cascade_lockf test. There
may be other tests which panic a Leopard system.
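One way to do this, assuming the bench script refers to each benchmark
by name (as the single-benchmark invocation above suggests), is to
locate and comment out the cascade_lockf entries before running on
Leopard:

	grep -n cascade_lockf bench

then remove or comment out the matching lines, or fall back to running
the remaining benchmarks individually, e.g.:

	./bench getpid > getpid.out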