Thursday, September 21, 2017

GIT setup for multiple users in Linux environment

It has been a long time since I posted anything here...

There are situations where people working on a project need to set up temporary version control for their development threads, before the work is mature enough to go into the central repository (be it the same repository tool or a different one). In our case, we started working on a project where we wanted to reach a certain level before checking into the main repository, so we set up a GIT repository quickly.

The setup was created to work within the same filesystem on a Linux machine. The steps followed to create it are as follows -

The git version to be used needs to be set up in the path -

  $ set path = ( //git-2.11.0/bin/ $path )
  $ git --version

In order to initialize the git repository -

  $ git init

A directory and a test file to initiate -

  $ mkdir fpga_testchip
  $ touch fpga_testchip/test

Adding these files to the repository -

  $ git add fpga_testchip/test
  $ git status
  $ git commit

You can configure the user name and email so that commits are associated with the right user and notifications can reach them in future -
  $ git config --global user.email "username@synopsys.com"
  $ git config --global user.name "username"
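
To double-check what has been set, the configuration can be listed (a quick verification step, not part of the original flow) -

  $ git config --global --list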

Adding another directory (<new_dir> below is just a placeholder for its name) -

  $ git add <new_dir>
  $ git commit
  $ git log

In order to set up a remote so that the repository can be accessed by others -

  $ git remote add origin /base/a/b/.git
  $ git push

If the current branch has no upstream configured yet -

  $ git push --set-upstream origin master

Another user needs to run the following command to clone the repository for their own usage -

  $ git clone /base/a/b/.git

Further, the user can start adding their work in the clone, adding and committing changes locally. In order to reflect them in the shared repository, the user needs to push the changes.
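
A minimal sketch of that flow, assuming the clone lands in a directory named b and using a placeholder file name -

  $ cd b
  $ git add my_module.v
  $ git commit -m "Add my_module"
  $ git push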

Happy GITing !!

Friday, March 13, 2015

Systemverilog UVM Tutorial #2 - Module, Class and Objects

So what has been added in SystemVerilog?

In Verilog, we are familiar with the module construct:

    module test;
      ...
      .....
    endmodule

If a module has ports (its interface to the outside world), it shall look like:

    module test(in_1, out_1);
      input  in_1;
      output out_1;
      ...
      .....
    endmodule

In SystemVerilog, classes are user-defined data types. SystemVerilog has moved towards Object Oriented Programming, and the basic features of a C++-style language have been added to it. Object Oriented Programming (OOP) is a programming philosophy which wraps data in a container and defines functions and tasks inside that container to operate on the data. Objects are created out of these classes through a concept named constructor (the new() function). Unlike C++, SystemVerilog has no explicit destructor; an object is automatically garbage-collected once no handle references it any more.

Overall, classes are used for abstracting a data set within a container.

    class Test;
      int x;

      function new(string name = "Test");
        $display("Constructor");
      endfunction: new

      task set(int i);
        x = i;
      endtask

      function int get();
        return x;
      endfunction

    endclass



The object creation can be done as follows -


    Test t = new();

Inside a testbench, the class object can be created and its tasks/functions used as follows -

    initial
      begin
         t.set(10);
         $display("Test class item x = %d", t.get());
      end

-------------------------


Inheritance



An important part of Object Oriented Programming is inheritance, where a class can be derived from another class. The derived class gets all the data items of the base class and, in addition, may add an extended set of data items. The class which extends the base class is called the derived class and owns all the properties of the base class. The derived class may introduce new properties and implement more functions and tasks to operate on its internal data items.


The implementation of an inherited class is as follows -

    class derivedTest extends Test;
       int p;

       function new(string name = "derivedTest");
          super.new();             // This ensures construction of the base-class part.
       endfunction: new

       task setp(int j);
          p = j;
       endtask

       function int getp();
          return p;
       endfunction
    endclass

In derivedTest, new data items are introduced in addition to the base class data items, and the derived class may implement more functions/tasks to operate on them.
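
As a small usage sketch (the module name tb and the values are purely illustrative), both the inherited and the newly added members can be exercised:

    module tb;
      derivedTest dt = new();

      initial
        begin
          dt.set(10);     // inherited from Test
          dt.setp(20);    // added in derivedTest
          $display("x = %0d, p = %0d", dt.get(), dt.getp());
        end
    endmodule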

Saturday, February 7, 2015

SystemVerilog UVM Tutorial # 1

Digital verification

Digital design verification has always been a challenging field, and a lot has changed over the last 10-15 years. The verification job has become complex and critical for IPs/subsystems as well as for SoCs.


With Moore's law, we clearly see that complexity has increased many-fold, and hence the life of the verification engineer has become even more complex at the office and on projects :-)
There is more logic to verify, and hence a need for more verification scenarios at the subsystem or IP level. In the SoC verification context, the key is to work out what needs to be verified so that the integration of the various IPs/subsystems is properly checked. In practice, this means extracting the detailed, exhaustive test scenarios coming from the IP and subsystem level into a verification suite.

So what ensures the sanity of the SoC verification task?

First of all, verification is never-ending: the context keeps expanding as the verification engineer gets involved.

Now, in order to have some way of ensuring a good signoff for IP/subsystem verification, coverage is the parameter used to specify what needs to be covered. There are different kinds of coverage (typically supported by the various toolsets from EDA vendors) -

  • Functional coverage - coverage in terms of the features that need to be verified (see the covergroup sketch after this list).
  • Code coverage - how much of the RTL code is exercised during verification.
  • Toggle coverage - checks whether all the inputs/outputs of the subsystem (and internal signals) toggle or not.
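
As a small illustration of functional coverage, here is a minimal covergroup sketch (the names pkt_cg, pkt_len, pkt_type and clk are purely hypothetical, not tied to any particular IP):

    covergroup pkt_cg @(posedge clk);
      cp_len  : coverpoint pkt_len {
                  bins small = {[1:64]};
                  bins large = {[65:1500]};
                }
      cp_type : coverpoint pkt_type;
      len_x_type : cross cp_len, cp_type;   // cross of packet length vs. type
    endgroup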

Verification Environment topology

    The verification environment for any subsystem needs the IP/subsystem's RTL view and the verification IP. There are additional requirements to program the subsystem's registers. Another requirement is a memory buffer which can be used to store the data provided to the IP or subsystem; this is typical for all subsystems which have an internal DMAC (or a master interface, e.g. AXI, AHB).
Verification environment shall look as follows -


    Verification IPs have typically been designed in Verilog, C and the e language (Cadence proprietary), but over the last few years the OVM/UVM methodology has taken over the space. More and more verification IPs are modeled and designed in SystemVerilog/UVM.

What is UVM ?

     The Universal Verification Methodology (UVM) is a standardized methodology for verifying integrated circuit designs. UVM is derived mainly from the OVM (Open Verification Methodology) which was, to a large part, based on the eRM (e Reuse Methodology) for the e Verification Language developed by Verisity Design in 2001. The UVM class library brings much automation to the SystemVerilog language such as sequences and data automation features (packing, copy, compare) etc., and unlike the previous methodologies developed independently by the simulator vendors, is an Accellera standard with support from multiple vendors: Aldec, Cadence, Mentor, and Synopsys.

So the key components as seen in the image are -

  • Driver
    The driver is a key component of the verification IP, used to generate data towards the DUT. It takes higher-abstraction data packets from another component, i.e. the sequencer, and drives the corresponding traffic onto the virtual interface (see the driver skeleton after this list).
  • Monitor

    The monitor is the component responsible for collecting all the data coming in on the virtual interface. It may contain a transactor to packetize the data to a higher abstraction level (typically wrapped in a class).

  • Sequencer

    The sequencer's main functions:
    • Initializing the DUT and the verification environment through the sequencer.
    • Configuring and generating scenarios, via sequences, for the verification environment and the DUV.

  • Sequences
    Sequences are the key elements that are played through the sequencer. The sequence is the basic primitive for stimulus generation.
  • Scoreboard

    The scoreboard is one of the most important components. It is responsible for deciding whether the test passes or fails. The scoreboard implementation varies a lot depending on the verification environment; overall, it compares the reference data (already available or extracted) against the results of the simulation.
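
As a minimal UVM driver skeleton (the names my_driver, my_txn and my_if are placeholders for the transaction and interface types of a particular VIP, not from the original post):

    class my_driver extends uvm_driver #(my_txn);
      `uvm_component_utils(my_driver)

      virtual my_if vif;   // virtual interface handle, assigned by the environment

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      task run_phase(uvm_phase phase);
        forever begin
          seq_item_port.get_next_item(req);   // pull next transaction from the sequencer
          // ... drive req onto vif here ...
          seq_item_port.item_done();          // signal completion back to the sequencer
        end
      endtask
    endclass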

Thursday, February 5, 2015

Debug:: makefile : make environment

A Makefile-based verification environment has some advantages, especially when there is embedded software that needs to be compiled as well; incremental compilation works pretty well.
The downside is that issues with make can be very cryptic. I came across a situation where a strange problem popped up with no clue pointing to the real cause. After a lot of debugging I found the reason, and it was a terrible experience.

The code segment is as follows -

OBJ_C_CODE   = $(foreach d,$(CODE_DIR1),$(patsubst %.c,%.o,$(subst $(d),$(OBJ_DIR),$(wildcard $(d)/*.c))))

OBJ_S_CODE    = $(foreach d,$(ASM_DIR),$(patsubst %.s,%.o,$(subst $(d),$(OBJ_DIR),$(wildcard $(d)/*.s))))

create_obj_dir:
	mkdir -p $(OBJ_DIR);

compile : create_obj_dir $(OBJ_DIR)/libsela_fw.a
	@echo "TOP Makefile"
	@echo "INCLUDE_PATH : $(INCLUDE_PATH)"
	@echo "BOOT_CODES   : $(BOOT_CODES)"
	@echo "BOOT_OBJS_DIR : $(BOOT_OBJS_DIR)"

$(OBJ_DIR)/%.o : ./code/%.c $(HEADER_FILES)
	$(ARM_CC) $(foreach m,$(INCLUDE_PATH),-I$m) $(C_OPT) -o $@ $< ;

$(OBJ_DIR)/%.o: ./asm/%.s $(HEADER_FILES)
	$(ARM_ASM) $(foreach m,$(INCLUDE_PATH),-I$m) $(ASM_OPT) -o $@ $< ;

$(OBJ_DIR)/libsela_fw.a : prova $(OBJ_C_CODE) $(OBJ_S_CODE)
	$(ARM_AR) $(AR_OPT) $@ $(OBJ_C_CODE) $(OBJ_S_CODE)


The dependencies in this case were working fine, but with a change in INCLUDE_PATH the compilation started breaking. The implicit rule "$(OBJ_DIR)/%.o : ./code/%.c $(HEADER_FILES)" stopped working without any clue.

The debugging I performed on the makefile -

$ make -n

Typically, this shows all the commands that are coded against the target without executing them. It is very helpful, but in this case it did not bring up the same error.

$  make -d

The output of the above command details all the dependencies that exist. It tries out all the implicit rules too, and a very detailed log is produced. At times it is difficult to follow, but with patience we can go through the lines to understand the sequence of operations -


....    Trying implicit prerequisite `/prj/hsi_ss/digvijay/PureSuite/VERIFICATION/puresuite_usb3/setup/.././..//Makefile.common.l'.
........
     Trying pattern rule with stem `Makefile.common.l'.
     Trying implicit prerequisite `/prj/hsi_ss/digvijay/PureSuite/VERIFICATION/puresuite_usb3/setup/.././..//s.Makefile.common.l'.
     Trying pattern rule with stem `Makefile.common.l'.
     Trying implicit prerequisite `/prj/hsi_ss/digvijay/PureSuite/VERIFICATION/puresuite_usb3/setup/.././..//SCCS/s.Makefile.common.l'.
........
     Trying implicit prerequisite `/prj/hsi_ss/digvijay/PureSuite/VERIFICATION/puresuite_usb3/setup/.././..//s.Makefile.common.w'.
     Trying pattern rule with stem `Makefile.common.w'.
     Trying implicit prerequisite `/prj/hsi_ss/digvijay/PureSuite/VERIFICATION/puresuite_usb3/setup/.././..//SCCS/s.Makefile.common.w'.
    Trying pattern rule with stem `Makefile.common'.


In the case of this issue, the log also came to an end without any more conclusive pointers.

Then I found a very strange thing: when implicit rules fail to apply, make does not generate any proper error message. Digging into the possible causes of implicit rule failures, I found that INCLUDE_PATH pointed to a path which was no longer accessible; the admin team had been removing some of the tool versions.

I simply pointed INCLUDE_PATH at a good version which was still available, and the make command worked fine.

So the learning -
  • If the implicit rules are not working, check the compilation command that would actually be run.
  • Check every path involved, and that the whole directory hierarchy is accessible (see the sanity-check sketch after this list).
  • Check that the options provided to the tools are valid.
  • After that, the make command should run without any problem.
  • Additionally, you can apply the two commands above (make -n and make -d) for other debugging.
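
A small sketch of such a path sanity check as an extra make target (the target name check_paths is only illustrative, not part of the original makefile; INCLUDE_PATH is the variable used above):

check_paths:
	@for d in $(INCLUDE_PATH); do \
	    test -d $$d || { echo "Missing include path: $$d"; exit 1; }; \
	done

Making compile depend on check_paths would surface a stale INCLUDE_PATH entry immediately, instead of a silently failing implicit rule.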

Have fun !!

Tuesday, December 23, 2014

AWK - A tool for almost everything....

AWK

Some very good links I want to keep at fingers -

Awk Introduction Tutorial – 7 Awk Print Examples

http://www.thegeekstuff.com/2010/01/awk-introduction-tutorial-7-awk-print-examples/

Awk Tutorial: Understand Awk Variables with 3 Practical Examples

http://www.thegeekstuff.com/2010/01/awk-tutorial-understand-awk-variables-with-3-practical-examples/

8 Powerful Awk Built-in Variables – FS, OFS, RS, ORS, NR, NF, FILENAME, FNR



http://www.thegeekstuff.com/2010/01/8-powerful-awk-built-in-variables-fs-ofs-rs-ors-nr-nf-filename-fnr/

Basics of AWK -

An awk program consists of:
  • An optional BEGIN segment
    • Processing to execute before any input is read
  • pattern - action pairs
    • Processing of the input data
    • For each pattern matched, the corresponding action is taken
  • An optional END segment
    • Processing after the end of the input data
        BEGIN   { action }
        pattern { action }
        pattern { action }
        .
        .
        .
        pattern { action }
        END     { action }
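
A small end-to-end example of this structure, summing the second column of an input file (the file name data.txt is only an illustration) -

        $ awk 'BEGIN { sum = 0 } { sum += $2 } END { print "total:", sum }' data.txt

BEGIN runs once before the first line is read, the middle pattern-less action runs for every input line, and END prints the accumulated total after the last line.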

Examples

Some of the basic usage for AWK :

  • In order to extract columns of a file - awk can separate out the columns of a file.
         $ awk '{ print $1 }'

         The above command prints the first column of the input.

         $ awk '{ print $1, $2 }'

         The above command prints the second column as well.

         For formatted printing, printf can be used as follows -

         $ awk '{ printf "first column %s second column %s\n", $1, $2 }'

         The above command prints the formatted string as specified.

Friday, December 12, 2014

SCRIPTING:: Linux sed tool

sed is the stream editor, and a fabulous one at that. It can do wonders for Linux programmers working in the scripting domain; for professionals in the semiconductor industry, it is an elixir at times.

Some of the most common things that can be done are as follows -

1. s/..../..../ - To substitute


To replace all instances of black with white:

$ sed 's/black/white/g' <old >new

Sometimes there is a particular need to run such a replacement over a set of files from within a script. The csh snippet below replaces the first script argument with the second in every .do file, in place:

   #! /bin/csh -f

   foreach word ( `/bin/ls *.do` )
        sed "s/$1/$2/g" $word > /tmp/a
        \cp -rf /tmp/a $word
   end
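
Assuming the snippet above is saved as, say, replace_in_do.csh (a hypothetical name) and made executable, it could be invoked as -

   $ ./replace_in_do.csh black white

This would replace every occurrence of black with white in all the .do files of the current directory.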