Getting VBM via anonymous ftp: There are three files, coms.tar, suite.tar, and valid_suite.tar, containing the files and directory structures for the UNIX command files, the VHDL Bench Mark suite source, and the VHDL Validation Suite, respectively. They may be copied in UNIX compressed form from VERIFY.EL.WPAFB.AF.MIL (134.131.22.151); the files are located in pub/VBM on that host. Since they are in compressed format, the UNIX "uncompress" utility is required to uncompress them, and once they are uncompressed, the UNIX "tar" command is required to extract their contents.

Benchmark Executors: Before starting to run any benchmarks, read this note along with "unixinfo.txt" in the "readme" directory. Make sure you "copy" (VMS) or "tar" (UNIX) the subdirectories "bench" and "coms" from your top-level or home directory, so the command files will work properly. The subdirectories "bench" and "coms" do not have to exist before you enter the command if you are using UNIX; "tar" will create them.

The general format for running a VMS batch job is:

$ submit command_file_name.com

The "/que=que_name" and "/log=log_file_name" qualifiers may be used to control which batch queue the job is submitted to and the pathname of the log file. See "unixinfo.txt" for a description of the UNIX equivalent.

The "bench" subdirectory contains the VHDL benchmark "shells". Also in this directory is the utility that takes the shells and your input parameters and generates proper VHDL source code to run for timing-data collection. The program that generates VHDL source from shells is "gen.vhd". It is written in VHDL, and the command files to analyze, model generate, and build it are "generate.com" (VMS) and "generate.unix" (UNIX) in the "coms" subdirectory. The "coms" subdirectory contains the same directory structure as "bench".
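Since the original FTP host is historical and unlikely to be reachable, the retrieval itself cannot be shown live; the sketch below uses a locally created stand-in archive (the file contents are invented) purely to illustrate the extraction step and to show that "tar" creates the "bench" and "coms" subdirectories on its own:

```shell
# Build a stand-in coms.tar (the real one comes from pub/VBM via ftp).
mkdir -p staging/bench staging/coms
echo "shell file placeholder" > staging/bench/example.vhd
tar -cf coms.tar -C staging bench coms

# From your top-level (home) directory: uncompress if needed, then extract.
# uncompress coms.tar.Z        # only for the compressed .Z download
tar -xf coms.tar               # creates ./bench and ./coms as needed
ls
```

Running the extraction with your home directory as the current directory is what puts "bench" and "coms" where the command files' relative paths expect them.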
For each "bench" directory with a shell file, there are command files ("test.com" for VMS, "test.unix" for UNIX) in "coms" to analyze, model generate, build, and simulate the benchmark. These command files were designed for numerous consecutive analyze/model generate/build/sim cycles on the same model, so after the "sim" command the run, kernel, entity, and architecture files are deleted from the VLS library. If this is not desired, the command files must be edited to remove these "vls del" commands and to add the "/replace" (VMS) or "-replace" (UNIX) option to the "build" command.

The "readme" subdirectory contains documentation. File "test.edt" has the TEST NUMBER, PATHNAME, and PURPOSE (description) of each test. The tests are organized into subdirectories by the VHDL features they test; each feature category has its own subdirectory. File "matrices.edt" lists all the feature categories in column format, along with each test number in row format; the categories tested in each benchmark are then marked with an "X" in the appropriate columns and rows. It would probably be helpful to print both "test.edt" and "matrices.edt". File "matrices_132col.edt" is the 132-column version of "matrices.edt", which is 80 columns wide.

When you are ready to start running the benchmarks, first analyze, model generate, and build "gen", using the command file "generate.com" (VMS) or "generate.unix" (UNIX). Then choose a test, change directories ("set def" for VMS, "cd" for UNIX) to that subdirectory, and print the shell either to the screen or a printer, because you need to read the comments to see what the parameters are. Look at the EXAMPLE section, and use the exact same file names in your "sim gen..." command. After you have decided what parameter values to use, enter the "sim gen..." command. You will then have a (hopefully) syntactically correct VHDL model named "test.vhd", or something very similar.
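If you want the analyzed units to persist between runs, the edit described above can be scripted. This is only a sketch for the UNIX command files: it assumes the cleanup lines contain the literal text "vls del" and that "build" commands start at the beginning of a line, and it uses a tiny invented stand-in file since the real test.unix layout may differ:

```shell
# Tiny stand-in command file (real test.unix files are larger).
printf 'build gen\nsim gen\nvls del run\n' > test.unix

# Drop the library-cleanup commands; rebuild with -replace instead.
sed -i.bak -e '/vls del/d' -e 's/^build /build -replace /' test.unix
cat test.unix
```

Check the edited file by eye before running it; if your toolset spells these commands differently, adjust the patterns to match.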
When "gen" has successfully completed generating a description, you will get an assertion violation along with the message "done". You can then use the "test.com" (VMS) or "test.unix" (UNIX) command file in the corresponding subdirectory of "coms" to run the job in non-interactive mode. For UNIX users, see "unixinfo.txt" to set up your .cshrc file to collect timing data, if you want it. Your command to start the job will be in the format:

% csh test.unix >& log_file_name &

This causes all messages related to the job to be written to the file "log_file_name" (you can name this whatever you want), and runs the job in the background instead of in interactive mode. When your job is reported "done", check "log_file_name" to see the timing data or any errors that might have occurred. You can also "more" or "cat" this file while the job is running to check which command is executing. "unixinfo.txt" explains how to interpret the timing data that will be in "log_file_name". You can then re-run the test, varying the parameters.

*** NOTE ***
*******************************************************************************
In order for the command files to work, you must either have your default directory set to where the shell resides when executing the "sim gen..." command, OR specify the complete pathname for the output file in the "sim gen..." command so that the output file ends up in the same directory as its shell. If this is not desirable, the command files must be edited to remove the "set def" (VMS) or "cd" (UNIX) lines.
*******************************************************************************
*******************************************************************************
Users with toolsets that do not support generic parameters in top-level entities must use front_end.vhd and alternate_gen.vhd instead of gen.vhd. Instead of the "sim gen/param=..."
command, you must edit front_end.vhd and set the constants ifile, ofile, and p1 through p11 to the appropriate parameter values. "ifile" corresponds to the input file name (the shell file name), "ofile" corresponds to the output file name (usually "test.vhd"), and p1 through p11 correspond to parameters 1 through 11 as defined in the shell file comments. You must count the number of characters in your input and output file names and set the string lengths for ifile_name and ofile_name accordingly. Then you must edit alternate_gen.vhd and set the string lengths for the input and output files to the same values as in front_end.vhd.

After editing, you need to analyze and model generate alternate_gen.vhd (entity: gen; architecture: gen), then analyze, model generate, build, and simulate front_end.vhd (entity: front_end; architecture: front_end). Simulation of front_end causes the VHDL source file to be generated. Each time new parameter values are desired, the edit, analyze, model generate, build, and sim cycle must be repeated. A command file ("front_end.com" for VMS, "front_end.unix" for UNIX) resides in [.coms] to do all but the editing.
*******************************************************************************
For VMS users, the timing collection commands are already in the command files; however, these commands do not provide timing data for any subprocesses spawned (as in model generation, build, and simulation). To get this data, check with your system manager to see what kind of system accounting, if any, is being done.
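As a sketch, the edited constants in front_end.vhd might look like the following. The file names, parameter values, and exact declaration style are all illustrative (only the names ifile, ofile, and p1 through p11 come from this note), and the string lengths must match the character counts of your own file names:

```vhdl
-- Illustrative values only; p3 through p11 follow the same pattern.
constant ifile : string(1 to 9) := "shell.vhd";  -- input shell, 9 characters
constant ofile : string(1 to 8) := "test.vhd";   -- output file, 8 characters
constant p1 : integer := 4;   -- meaning defined in the shell's comments
constant p2 : integer := 16;
```

Remember to carry the same two string lengths over into alternate_gen.vhd before re-analyzing both files.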
If "process" accounting is done, ask the system manager to execute a command in the following general form AFTER you have run some benchmarks:

$ account/binary/output=DATA_FILE.DAT/since=DD-MM-YYYY/user=YOUR_USER_ID

where DATA_FILE.DAT is any name you choose for the output file, DD-MM-YYYY is the date you started running the benchmarks (check with the manager to see how far back the accounting files go), and YOUR_USER_ID is your login name on the system. When you receive the DATA_FILE.DAT file, use the following command to look at it:

$ acc/full DATA_FILE.DAT

This displays one subprocess's data per screen. You can use the date/time data in the log_file to help identify which subprocess corresponds to which command. Only the model generate, build, and sim commands spawn subprocesses, so nothing in the accounting file will correspond to the analysis commands. When you have matched a command with a subprocess, record the "CPU time" data for the command. When you have done this for each command in a log_file, go through the log_file and subtract the "Elapsed CPU Time" number above each command from the one below it, to get the CPU time the parent process spent on that command. Add this time to the subprocess CPU time from the accounting file; the sum is the total CPU time for the command.

I am not familiar with the other types of accounting, although I know that "image accounting" can also be used.

If you have any questions or problems understanding the system, or if you encounter errors, please phone me:

Capt Karen Serafino
(513) 255-8635

***************************************************************************
Addendum: Included is a selection of parity generator models for varying numbers of inputs. These models represent behavioral and structural implementations of the parity generators. They are in the tar file called parity.tar. Command files and a README file are included with the models.

Added 26 August 1991 by CPT Michael Dukes
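The bookkeeping described above can be checked with made-up numbers: suppose the log file shows "Elapsed CPU Time" readings of 12.30 seconds just above a "build" command and 15.80 seconds just below it, and the accounting file shows 42.10 seconds of CPU time for the matching subprocess (all three figures are hypothetical). The arithmetic is then:

```shell
# Parent CPU for the command = reading below minus reading above;
# total = parent CPU + subprocess CPU from the accounting file.
awk 'BEGIN { parent = 15.80 - 12.30; total = parent + 42.10;
             printf "parent=%.2f total=%.2f\n", parent, total }'
# prints: parent=3.50 total=45.60
```

So 45.60 seconds would be recorded as the total CPU time for that "build" command.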