Writing Assembler code for the x86_64 & AArch64 architectures

Printing a loop message on both the x86_64 and AArch64 architectures.

Output:

Loop: 00 
Loop: 01 
Loop: 02 
Loop: 03 
Loop: 04 
Loop: 05 
Loop: 06 
Loop: 07 
Loop: 08 
Loop: 09 
Loop: 10 
Loop: 11 
...
Loop: 30

x86_64

This is a reference guide to the steps our group followed to generate the output above on an x86_64 architecture.

Source code

.text                         /* or .section .text - holds the program code */
.globl    _start              /* exports _start so the linker knows where program execution begins */

start = 0                     /* starting value for the loop index; note that this is a symbol (constant), not a variable */
max = 31                      /* loop exits when the index hits this number (loop condition is i<max) */
zero = 48                     /* ASCII code for the character '0' */

_start:
    mov     $start,%r15       /* loop index start is in register 15 */
    mov     $10,%r13          /* put value 10 into register 13 */

loop:
    mov     $zero,%r14        /* ASCII '0' into register 14 */
    mov     %r15,%rax         /* copy the loop index from register 15 into the accumulator */
    mov     $0,%rdx           /* clear rdx: div uses rdx:rax as the dividend */

    div     %r13              /* divide rdx:rax by register 13; quotient goes to rax, remainder to rdx */

    mov     $zero,%r14        /* ASCII '0' into register 14 */
    add     %rax,%r14         /* add the quotient (in rax) to get the ASCII tens digit */
    movb    %r14b,msg+6       /* store the low byte of register 14 at msg+6 */

    mov     $zero,%r14        /* ASCII '0' into register 14 again */
    add     %rdx,%r14         /* add the remainder (in rdx) to get the ASCII ones digit */
    movb    %r14b,msg+7       /* store the low byte of register 14 at msg+7 */

    movq    $len,%rdx         /* message length */
    movq    $msg,%rsi         /* address of the message */
    movq    $1,%rdi           /* file descriptor: 1 is stdout */
    movq    $1,%rax           /* syscall sys_write */
    syscall

    inc     %r15              /* increment index */
    cmp     $max,%r15         /* see if we're done */
    jne     loop              /* loop if we're not */

    mov     $0,%rdi           /* exit status */
    mov     $60,%rax          /* syscall sys_exit */
    syscall

.data                         /* read/write section - keeping msg in .text would cause a SEGFAULT when writing to it */

msg:    .ascii "Loop:    \n"
        len = . - msg

Our idea here was to leave some blank space in the msg string “Loop:    \n” and fill it in on every iteration: the loop index is divided by 10, and the quotient and remainder (converted to ASCII) are written at msg + 6 bytes and msg + 7 bytes respectively, producing the 2-digit decimal number after “Loop: ”.
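As a quick sanity check of that math, the same quotient/remainder split can be tried in a bash shell (this is only an illustration of the arithmetic, not part of the assembly program):

i=27
echo "quotient: $((i / 10))  remainder: $((i % 10))"    # prints 2 and 7
echo "ASCII codes: $((i / 10 + 48)) $((i % 10 + 48))"   # prints 50 ('2') and 55 ('7')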

Layout asm

gdb’s layout asm was a useful debugging tool that helped me figure out exactly what was going on in memory and in the registers.

So, for example, with the source code above (“lab3.s”), you could do:
as -g -o lab3.o lab3.s

  • -o: output filename
  • -g: request that symbol information be attached to the binary for debugging purposes

Run the linker:
ld -o lab3 lab3.o

Run GNU debugger:
gdb lab3

Some useful debugging commands:

  • Set a breakpoint: b 19
  • Start program: r
  • Switch to asm layout: layout asm
  • Step one line: s
  • Step over (don’t go into function call): n
  • Print the value in a register: info registers r14 or i r r14
  • Print the string at a specific address: printf "%s\n", 0x30
  • Print the string at an address offset by x bytes:

    printf "%s\n", 0x30+4
    printf "%s\n", 0x30+8
    etc.
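Putting a few of these together, a session on the lab3 binary above might look something like this (the register and symbol names are the ones from my source, so adjust as needed):

gdb lab3
(gdb) break _start              # or: b 19
(gdb) run
(gdb) layout asm
(gdb) stepi                     # step one machine instruction at a time
(gdb) info registers r15 rax rdx
(gdb) x/s &msg                  # examine the message buffer as a string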

AArch64

Here is a reference to replicate the same output on the AArch64 architecture.

I was able to use an approach very similar to the one for the x86_64 architecture.

.text
.globl _start

start = 0   /* starting value for the loop index */
max = 31    /* number to stop the loop at */
num = 48    /* ASCII code for the character '0' */

_start:
    mov     x3,start        /* loop index lives in x3 */

loop:
    mov     x4,10           /* for calculating quotient/remainder */

    /* calculate quotient - store in x6 */
    udiv    x6,x3,x4      /* x6 = x3 / 10 */

    /* calculate remainder - store in x7 */
    msub    x7,x4,x6,x3   /* x7 = x3 - (10 * x6) */

    adr     x1, msg         /* msg location memory address */
    add     x6,x6,num       /* convert quotient digit to ASCII */
    add     x7,x7,num       /* convert remainder digit to ASCII */
    strb    w6,[x1,6]       /* store in msg + 6 bytes memory location */
    strb    w7,[x1,7]       /* store in msg + 7 bytes memory location */

    /* print */
    mov     x0,1            /* file descriptor: 1 is stdout */
    mov     x2,len          /* message length */
    mov     x8,64           /* write is syscall #64 */
    svc     0               /* invoke syscall */

    add     x3,x3,1         /* increment index */
    cmp     x3,max          /* check for end of loop */
    bne     loop            /* branch back while x3 != max */

    mov     x0,0            /* status -> 0 */
    mov     x8,93           /* exit is syscall #93 */
    svc     0               /* invoke syscall */
 
.data

msg:    .ascii  "Loop:    \n"
        len = . - msg
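The assemble/link/run steps are the same as on x86_64, only run on an AArch64 machine (I am assuming the source file is again saved as lab3.s):

as -g -o lab3.o lab3.s
ld -o lab3 lab3.o
./lab3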

Differences between x86_64 and AArch64 assembly syntax:

  1. Register names:
    • x86_64: prefixed by %
    • AArch64: not prefixed
  2. Immediate values:
    • x86_64: prefixed by $
    • AArch64: not prefixed
  3. Division:
    • x86_64: div %r13 – calculates both the quotient and the remainder.  It takes a single register operand (%r13) and divides the value in rdx:rax by it, storing the quotient in rax and the remainder in rdx.
    • AArch64: two instructions are required for this.
      udiv x6,x3,x4 – divides x3 by x4 and stores the quotient in x6
      msub x7,x4,x6,x3 – computes x3 - (x4 * x6), i.e. the remainder, and stores it in x7
  4. Operand order:
    • x86_64 (AT&T syntax): the source operand comes first and the destination last
    • AArch64: the destination register comes first

Conclusion

Wrapping my head around how the different registers, memory operations and overall syntax work was difficult, but once I had a look at the disassembly and analyzed each executed line of code, I got a much better view of what was going on in memory.  After that, translating the syntax from one architecture to the other was relatively simple, at least for what was required to generate this output.

More useful links:
ARM Instruction Set Overview


Setting up Vagrant, Brackets and Thimble and choosing a bug to work on

Installing on Linux VM:

My first attempt at installing Vagrant was on a Fedora 25 virtual machine.  I was able to get it running using the fedora/24-cloud-base box and changing a few config settings in the Vagrantfile (following suggestions from this issue), but I wasn’t able to get it working within the thimble.mozilla.org project.  Even after changing the project’s Vagrantfile config settings to the ones I had working, keeping the project’s original config settings, and installing some of the libvirt dependencies, ‘vagrant up’ would still fail. (EDIT: after leaving the VM for a couple of hours and then running vagrant up again, it suddenly worked. I’m still not sure how or why. There are several issues open on GitHub regarding this, but leaving my computer for a few hours and coming back to it seems to have worked for now.)

Installing Vagrant on Windows:

Since I was having issues with the installation on my Linux VM, I decided instead to try the installation on my main OS, Windows 10, which already had Oracle’s VirtualBox installed.  I figured I could use MinGW (a Linux-like terminal) already set up in ConEmu and run Vagrant from there; Git Bash, which I also have included in my ConEmu setup, can be used as well.  The installation process on Windows went smoothly, following the steps provided on the vagrantup website.

Installing Brackets & Thimble:

I continued to follow the setup instructions here, cloning both the Bramble and Thimble projects from GitHub, and was able to run Vagrant successfully from the Thimble project directory.
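For reference, the standard Vagrant workflow from the project directory looks like this (the directory name below is taken from the project name above, so treat it as approximate):

cd thimble.mozilla.org      # the cloned Thimble project directory
vagrant up                  # download the box, provision and boot the VM
vagrant ssh                 # optional: open a shell inside the running VM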

Here is a screenshot of Bramble up and running:

bramble_windows_running

and Thimble:

thimble_running

Thimble bug:

From the list of good first bugs, I decided to work on a bug regarding the ‘last edited field’ in Issue #719.  I will follow up on this post with my progress working on this bug.

Contributing to an open source project – npm mysql package.json

I chose the npm mysql package to contribute to. In this post, I will go over the steps I took for contributing a minor fix to an open source project, using a Linux terminal, Vi editor and Git.

Review Process

Looking at the package.json file, I noticed that the repository field URL did not contain an absolute URL path.  So I ran the file through the package-json-validator:

package-json_result

The validator also recommends including the “keywords” and “bugs” fields.  I read over the official npm documentation to see how to properly write these fields in package.json.

Repository:

"repository" :
  { "type" : "git",
    "url" : "https://github.com/npm/npm.git"
  }

Keywords: An array of strings to help developers find the package using ‘npm search’.

Bugs:

{
  "url" : "https://github.com/owner/project/issues",
  "email" : "project@hostname.com"
}

I can adjust the fields now to match these formats.

Adding my changes

  • Fork repository from https://github.com/mysqljs/mysql
  • Clone project to my local workspace: git clone git@github.com:lkisac/mysql.git
  • I used the Vim editor to edit the package.json file: vi package.json
  • Adjust repository URL to:
    "repository": {
      "type": "git",
      "url": "http://github.com/mysqljs/mysql"
    },
    
  • Add change to git staging: git add package.json
  • To give each change its own commit message, I kept this change separate from the “keywords + bugs” change and committed it first: git commit -m "replaced repository url with valid repository url and type"

  • Then back to editing package.json in vi to add the “keywords” and “bugs” fields:
"keywords": [
  "mysql",
  "sql",
  "database",
  "query"
],
"bugs": "http://github.com/mysqljs/mysql/issues",

Since I only included the issues URL in the bugs field, I followed the npm documentation’s suggestion of using a single string instead of an object.

  • Add change: git add package.json
  • Commit change: git commit -m "added keywords + bugs url"
  • View changes in commits before pushing to repo: git show

After running git show, I noticed the indentation was off (4 spaces instead of 2) on the “sql” keyword line.  I had already set tabstop to 2, but found out that this setting does not replace tab characters with spaces.  To do that you also have to set shiftwidth and expandtab, so I added these two lines to my ~/.vimrc file (or, system-wide on Fedora, to /etc/vimrc as root):


:set shiftwidth=2
:set expandtab

I switched back to my regular user and opened the package.json file again in vi.  If I press Enter now at the end of the “mysql” line, vi automatically inserts 2 spaces at the beginning of the new line.  To double-check this, after hitting Enter you can backspace and see that the cursor moves back one space instead of the tabstop’s set number of spaces.

An alternate quick fix for this of course would be to use a regular expression substitution in vi on lines 13 to 23:

:13,23s/\t/  /g

This will replace all tab characters with two spaces from lines 13 to 23.
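Another option, once tabstop, shiftwidth and expandtab are set, is to let vi re-expand the existing tabs across the whole file (a small aside, not the approach I used here):

:set tabstop=2 shiftwidth=2 expandtab
:retab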

Continuing with my changes… now I save these changes and exit vi with Shift + ZZ

  • See if changes are correct: git diff package.json
  • Spaces look good now, so I can add my changes: git add package.json
  • Commit changes to git: git commit -m "fixed vi indent to 2 spaces for sql keyword"

Now, looking at the last three commits I made, the latest one was only a minor fix to the previous, so I wanted to combine the two into a single commit.

git log --max-count 3

git_last_3_commits_combine

You can do this with a “git rebase”:

git rebase --interactive HEAD~2

This opens the git rebase file:

git_rebase_start

Here I want to pick the “added keywords + bugs url” as the main commit, and squash the “fixed vi indent…” commit into the main commit.

git_rebase_squash
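In plain text, the edited todo list ends up looking roughly like this (the abbreviated hashes below are placeholders, not my actual commit IDs):

pick   1a2b3c4 added keywords + bugs url
squash 5d6e7f8 fixed vi indent to 2 spaces for sql keyword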

Now I can hit Shift + ZZ to save my changes and exit git rebase.

Git now displays the combination of commit messages:

git_rebase_commit_msg

Since I want to only use the “added keywords + bugs url” commit message, I can delete the 2nd commit message.

git_rebase_new_commit_msg

Hit Shift + ZZ again to save the changes and exit.

Now the “fixed vi indent…” commit has been squashed with the “added keywords + bugs url” commit as one commit.

git_final_squash_log

Now I can push to the remote repository:
git push

One thing to note, as mentioned in GitHub’s documentation on git rebase: if the commits you squashed locally had previously been pushed to your remote repository, you would have to run:

git push origin branch-name --force

You do have to be careful when choosing to rebase commits that have already been pushed, and make sure they have not been reviewed or used in any way.  Use --force only for very recent commits.
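If your git version supports it, a somewhat safer variant is:

git push origin branch-name --force-with-lease

which refuses to overwrite the remote branch if someone else has pushed new commits to it in the meantime.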

Creating Pull Request

Create the pull request by clicking the ‘Create pull request’ button in GitHub, write a description of the changes you’ve made, and submit the new pull request.

pull_request_checks

Pull Request Review

One of the collaborators was quick to respond, within 3 minutes of the pull request.  It was noted that npm now automatically populates these fields, so adjusting the package.json file is not necessary.  The newer npm also no longer uses the keywords array in search, and the repository URL the validator flagged is in fact valid as a git shortcut URL.

So I ended up creating a new issue for the validator tool, and the maintainer confirmed that the validator is slightly outdated and said they would look into updating it when they had the chance.

cflow & cadvisor – reviewing successful patches for open source projects

cflow

cflow has a single maintainer, who is also the project’s developer.  It is still an active project and uses a mailing list for bug reports and suggestions.

I had a look at the most recent patch, from November 2016, regarding inverted trees missing static function calls in the flow chart list.  The review was quick, and this method of review (a mailing list, as opposed to a workflow directly within GitHub) worked well in this case: the maintainer was able to review and apply the change within a couple of weeks.  Although the workflow is not done directly on GitHub, the maintainer has made the project available from a Git repository as well as in CVS for contributors to work on the code.  The project’s official website also includes some well documented usage notes for Git.

Using GIT at Savannah: http://savannah.gnu.org/maintenance/UsingGit/

cadvisor

For the cadvisor open source GitHub project, I had a look at the review process for the ‘Build & test cAdvisor with go 1.7.1 #1508’ pull request from October 19, 2016.  It addresses an issue opened on August 22 of that year regarding the release of cAdvisor v0.24 with Go 1.7.  Responses to the request were made the very same day by active participants in the project.  It was then reviewed by one of the maintainers on November 2, 2016, a few things were discussed regarding builds in a specific environment, and the pull request was finally merged on December 5, 2016.

git_merge_tests

GitHub is a great way for developers to share and contribute their ideas and suggestions to the open source community.  The review process for pull requests is safe because all required tests must pass before the changes can be merged into the project.  I also found that although GitHub is the most widely used platform for hosting open source projects, a mailing list (as in the cflow project) can work just as well if the scope of the project is not too large and there are only one or a few active maintainers.

Build process for cflow & cadvisor Linux open source projects

I chose the cflow and cadvisor Linux open source projects for documenting the package build & installation process and listed some of the things I encountered along the way.  Both package installations were done on a Fedora 25 virtual machine.

cflow

A GNU open source project, licensed under the GPL, that charts control flow within C source code.

Build & install steps:

  1. Download cflow-1.4.tar.bz2 from http://directory.fsf.org/wiki/Cflow ‘Download’ link
  2. Unpack tar file:
    tar xvf cflow-1.4.tar.bz2
  3. Change to install directory: cd cflow-1.4
  4. Create make files:
    ./configure
  5. Compile package:
    make
  6. Switch user to root:
    su root
  7. Install programs/data files/documentation:
    make install
  8. Verify that installation completed correctly: make installcheck

All 21 tests were successful.

Testing newly installed cflow software:

I tested cflow with the whoami.c sample file from the cflow manual: https://www.gnu.org/software/cflow/manual/cflow.html#Quick-Start

Run:
cflow whoami.c

Output:

main() :
    fprintf()
    who_am_i() :
        getpwuid()
        geteuid()
        getenv()
        fprintf()
        printf()

cflow package installation was successful.  No extra dependencies were required during the installation process.  I tried this software with C++ code and it also works very well.
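Related to the inverted-tree patch mentioned earlier, cflow can also print the call tree reversed (callers listed under their callees), which is worth trying on the same sample file:

cflow --reverse whoami.c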

cadvisor

This project is licensed under the Apache License Version 2.0 and provides resource usage and performance characteristics for running containers.  It also has native support for Docker, an open source project for containerization.  Here is a previous blog of mine on Docker.

Github open source code link: https://github.com/google/cadvisor

Installing required dependencies:

Go language – an open-source programming language initially developed by Google.  Installation instructions for Linux can be found here: http://ask.xmodulo.com/install-go-language-linux.html (instructions for both Ubuntu and Fedora are included).  The total download size is approximately 49 MB.

Once Go is installed (cAdvisor requires Go 1.6+ to build), I followed the build & testing instructions here: https://github.com/google/cadvisor/blob/master/docs/development/build.md

At this time, I have installed go version 1.7.4 linux/amd64.
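To confirm the toolchain and workspace settings before building, the go tool itself can report them (output will of course vary by machine):

go version                  # e.g. go version go1.7.4 linux/amd64
go env GOROOT GOPATH        # print the Go installation root and the workspace path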

Issues I encountered:

After running ‘make build’ from the $GOPATH/src/github.com/google/cadvisor path, the cadvisor build failed:

building assets
building binaries
building cadvisor
/usr/lib/golang/pkg/tool/linux_amd64/link: running gcc failed: exit status 1
/usr/bin/ld: cannot find -lpthread
/usr/bin/ld: cannot find -ldl
/usr/bin/ld: cannot find -lc
collect2: error: ld returned 1 exit status
Makefile:38: recipe for target 'build' failed
make: *** [build] Error 2

I tried running ‘make test’ for the unit tests only – all existing test files passed OK.  I also tested gcc with another C program using the -lpthread argument and it works fine, so I investigated further to figure out what the issue was.
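As a side note, the same three ‘cannot find’ errors can typically be reproduced outside of the Go build by forcing a fully static link of a trivial C program (hello.c here is just a stand-in, not part of cAdvisor):

gcc hello.c -o hello -lpthread      # normal dynamic link: works fine
gcc -static hello.c -o hello        # static link: fails the same way until the static C libraries are installed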

Steps I took to resolve the issue:

The Linux kernel I’m running is 4.8.6-300.fc25.x86_64 GNU/Linux (‘uname -or’), on Fedora release 25 (‘cat /etc/fedora-release’).

After searching online for ‘/usr/lib/golang/pkg/tool/linux_amd64/link: running gcc failed’, I came across this GitHub issue: https://github.com/golang/go/issues/13114.  It mentions that having the GOROOT environment variable set can cause issues on any system other than Windows.

So I checked if GOROOT was set:

echo $GOROOT

/usr/lib/golang

And unset it:

unset GOROOT

Running ‘make build’ again still fails, so it did not fix the issue.

From the Makefile at line 38:

build: assets
        @echo ">> building binaries"
        @./build/build.sh

It seems that the issue has to do with the C compiler/linker step, which is why I thought unsetting GOROOT might fix it.  After more digging, I found the fix for this issue here: https://github.com/kubernetes/minikube/issues/585

The fix for this was installing glibc-static on my system:

dnf install @development-tools
dnf install glibc-static

Build is now successful.
make build

building assets
building binaries
building cadvisor

Also worth noting: after installing glibc-static, I set GOROOT back to the original /usr/lib/golang location (export GOROOT=/usr/lib/golang) and ‘make build’ still worked.
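With the build working, the binary can be run straight from the project directory (port 8080 is cAdvisor’s documented default; adjust if yours differs):

sudo ./cadvisor
# then browse to http://localhost:8080 for the live resource usage UI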

Docker – application container engine

This is my first blog post for OSD600 (Open Source Development) at Seneca.  As one of my first tasks for analyzing an open source project, I’ve chosen the Docker project (https://github.com/docker), which currently has 100 contributors and just under 2000 open issues being worked on by the GitHub community.

Docker is an application container engine that is hardware/platform independent, meaning it is capable of packaging, shipping and running any application on any type of hardware or platform as a “lightweight” container (a standardized unit); no particular language, framework or packaging system is required.

Containers are considered a better/faster distribution method than setting up a virtual machine, since a container does not require a full copy of the operating system, giving it a much faster startup time (minutes down to seconds).  Here is an interesting thread on Quora (https://www.quora.com/What-is-the-difference-between-containerization-Docker-and-virtualization-VMWare-VirtualBox-Xen) detailing some of the advantages of containers over VMs and going over some of the main differences between the two.
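For a concrete feel of that startup-time difference, starting a throwaway container is a one-liner (assuming Docker is installed and its daemon is running):

docker run -it --rm ubuntu bash     # pulls the ubuntu image on first use, then opens a shell inside a new container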

Docker was released as open source in March of 2013.  The official website can be found here (https://www.docker.com).

One of the major milestones for this project was on September 19, 2013, when Docker announced a major alliance with Red Hat, which made Fedora/RHEL compatibility possible and made Docker a standard container within Red Hat OpenShift.  Another major open source project, the current leading open source automation server Jenkins (https://jenkins.io/), already has many community-developed plugins dedicated to Docker compatibility (https://jenkins.io/solutions/docker/).

Software Portability & Optimization

This section of my blog will include all topics related to my SPO600 course at Seneca.

Blog topics will include:

  • Open source code review
  • Assembly Language
  • Compiler options & optimization
  • Algorithm Selection
  • Computer Architectures
  • Course project stages: I, II, & III