Linkerd example for consul


tags: [consul, linkerd]

This post looks at how you can configure linkerd to use consul as a service discovery backend.

Part of a series on linkerd:
* Part one: linkerd and consul

Sample overview

The following components make up the sample system:
* curl, which acts as our client application
* linkerd, for proxying requests to our service
* audit, an example service which has a /health endpoint
* consul, as our service discovery back-end
* consul-registrator, to automatically register services with consul

System overview

+--------+      +---------+    +-----------------+
| client +----> | linkerd +--> | service (audit) |
+--------+      +----^----+    +-------+---------+
                     |                 |
                +----+---+     +-------v------------+
                | consul <-----+ consul registrator |
                +--------+     +--------------------+

1. Look up a consul service by path

The sample code for this can be found here: https://github.com/ewilde/linkerd-examples/tree/master/post-1

curl -H "Host: api.company.com" http://localhost:4140/audit/health -i

This should look up the service named audit in the consul catalog and call the service with GET /health.
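
To confirm that consul-registrator has actually registered the audit service, you can also query consul's HTTP API directly (assuming consul's default HTTP port 8500 is reachable on localhost):

curl http://localhost:8500/v1/catalog/service/audit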

Linkerd configuration

namers:
- kind: io.l5d.consul
  includeTag: false
  useHealthCheck: false
routers:
- protocol: http 
  label: /http-consul
  identifier:
   kind: io.l5d.path
   segments: 1
   consume: true
  dtab: |
    /svc => /#/io.l5d.consul/dc1;
  servers:
  - port: 4140
    ip: 0.0.0.0
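
Roughly how a request resolves with this configuration (my reading of the identifier and dtab above):

GET /audit/health (Host: api.company.com)
  io.l5d.path identifier (segments: 1, consume: true)  ->  name /svc/audit, request forwarded as GET /health
  dtab rule /svc => /#/io.l5d.consul/dc1               ->  /#/io.l5d.consul/dc1/audit
  io.l5d.consul namer                                  ->  instances of the audit service registered in consul datacenter dc1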

2. Look up a consul service by subdomain

curl -H "Host: audit.company.com" http://localhost:4140/health -i

This should look up the service named audit in the consul catalog and call the service with GET /health.

Linkerd configuration

namers:
- kind: io.l5d.consul
  includeTag: false
  useHealthCheck: false

routers:
- protocol: http
  label: /host/http-consul
  identifier:
   kind: io.l5d.header.token
  dtab: |
    /consul  => /#/io.l5d.consul/dc1;
    /svc     => /$/io.buoyant.http.subdomainOfPfx/company.com/consul;
  servers:
  - port: 4140
    ip: 0.0.0.0
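
The equivalent resolution for the subdomain flavour (again, my reading of the configuration):

GET /health (Host: audit.company.com)
  io.l5d.header.token identifier                                          ->  /svc/audit.company.com
  dtab rule /svc => /$/io.buoyant.http.subdomainOfPfx/company.com/consul  ->  /consul/audit
  dtab rule /consul => /#/io.l5d.consul/dc1                               ->  /#/io.l5d.consul/dc1/audit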


Consul startup using systemd on ubuntu


tags: [consul, systemd, ubuntu]

At home I use Ubuntu, consul and [vault](https://vaultproject.io) quite a bit. Here is how I get consul to start up when my computer boots, using systemd.

#!/usr/bin/env bash
set -e

echo "Installing dependencies..."
if [ -x "$(command -v apt-get)" ]; then
  sudo apt-get update -y
  sudo apt-get install -y unzip
else
  sudo yum update -y
  sudo yum install -y unzip wget
fi


echo "Fetching Consul..."
CONSUL=0.7.5
cd /tmp
wget https://releases.hashicorp.com/consul/${CONSUL}/consul_${CONSUL}_linux_amd64.zip -O consul.zip
wget https://raw.githubusercontent.com/hashicorp/consul/master/terraform/shared/scripts/rhel_consul.service -O consul.service

echo "Installing Consul..."
unzip consul.zip >/dev/null
chmod +x consul
sudo mv consul /usr/local/bin/consul
sudo mkdir -p /opt/consul/data


# Write the flags to a temporary file
cat >/tmp/consul_flags << EOF
CONSUL_FLAGS="-server -bind=192.168.1.97 -ui -data-dir=/opt/consul/data -bootstrap-expect 1"
EOF

# /tmp/upstart.conf is only present if it was provisioned separately; on a
# systemd-only ubuntu install this branch is skipped and the systemd unit is used
if [ -f /tmp/upstart.conf ];
then
  echo "Installing Upstart service..."
  sudo mkdir -p /etc/consul.d
  sudo mkdir -p /etc/service
  sudo chown root:root /tmp/upstart.conf
  sudo mv /tmp/upstart.conf /etc/init/consul.conf
  sudo chmod 0644 /etc/init/consul.conf
  sudo mv /tmp/consul_flags /etc/service/consul
  sudo chmod 0644 /etc/service/consul
else
  echo "Installing Systemd service..."
  sudo mkdir -p /etc/systemd/system/consul.d
  sudo chown root:root /tmp/consul.service
  sudo mv /tmp/consul.service /etc/systemd/system/consul.service
  sudo chmod 0644 /etc/systemd/system/consul.service
  sudo mv /tmp/consul_flags /etc/default/consul
  sudo chown root:root /etc/default/consul
  sudo chmod 0644 /etc/default/consul
fi

I adapted this script from https://github.com/hashicorp/consul/blob/master/terraform/shared/scripts/install.sh
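
The downloaded rhel_consul.service may expect its flags in /etc/sysconfig/consul (the RHEL convention), whereas the script above writes them to /etc/default/consul, so on Ubuntu I use a minimal unit along these lines instead (a sketch, not the exact file from the consul repository):

[Unit]
Description=consul agent
Requires=network-online.target
After=network-online.target

[Service]
EnvironmentFile=-/etc/default/consul
ExecStart=/usr/local/bin/consul agent $CONSUL_FLAGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

With the unit and /etc/default/consul in place, enable it so it starts on boot:

sudo systemctl daemon-reload
sudo systemctl enable consul
sudo systemctl start consul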


Install Kong api gateway from source on macosx


tags: [kong, mac, macosx]

I wasted quite a bit of time today figuring out how to compile kong on my mac, so here it is:

Install openresty

brew update
brew install pcre openssl openresty

Install lua and luarocks

Kong is compiled against lua 5.1

curl -R -O https://www.lua.org/ftp/lua-5.1.5.tar.gz
tar zxf lua-5.1.5.tar.gz
cd lua-5.1.5
make macosx
sudo make install

Luarocks is a package manager that kong uses

git clone git@github.com:luarocks/luarocks.git
cd luarocks
./configure
make install

Compile kong

$ git clone git@github.com:Mashape/kong.git
$ cd kong
$ sudo make install
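
If the build went through, the kong rock and its command line client should now be installed (assuming the luarocks bin directory is on your PATH); a quick sanity check:

luarocks list kong
kong version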

Fish function to switch aws profiles


tags: [fish, aws, stackedit]

Note: I’m sure there are better ways of doing this!

I wanted a convenient way to switch aws command line environment variables based on my desired profile.

Below are the environment variables I wanted to swap based on the configured profiles:

Name                   Description
AWS_ACCESS_KEY_ID      AWS access key.
AWS_SECRET_ACCESS_KEY  AWS secret key. Access and secret key variables override credentials stored in credential and config files.
AWS_DEFAULT_REGION     AWS region. This variable overrides the default region of the in-use profile, if set.

Creating a profile

You can configure your credentials profiles in ~/.aws/credentials.

[default]
aws_access_key_id=AKIAIOSFODNN7123456890
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCY123456890

[ed]
aws_access_key_id=AKIAI44QH8DHB123456890
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvb123456890

And your other settings in ~/.aws/config

[default]
output = json
region = us-east-1

[profile ed]
output = json
region = eu-west-1

> Note: you can create these by hand, or use `aws configure --profile <name>`, which is easier

Creating a fish function

The function below is really simple and uses aws configure get to set the appropriate environment variables for the profile you have selected.

function aws-profile -d 'Switch aws profile'
    set -gx AWS_ACCESS_KEY_ID (aws configure get --profile $argv aws_access_key_id)
    set -gx AWS_SECRET_ACCESS_KEY (aws configure get --profile $argv aws_secret_access_key)
    set -gx AWS_DEFAULT_REGION (aws configure get --profile $argv region)
    echo Profile switched to $argv
    echo AWS_ACCESS_KEY_ID $AWS_ACCESS_KEY_ID
    echo AWS_SECRET_ACCESS_KEY $AWS_SECRET_ACCESS_KEY
    echo AWS_DEFAULT_REGION $AWS_DEFAULT_REGION
end

Example

$ aws-profile ed

Profile switched to ed
AWS_ACCESS_KEY_ID AKIAI44QH8DHB123456890
AWS_SECRET_ACCESS_KEY je7MtGbClwBF/2Zp9Utk/h3yCo8nvb123456890
AWS_DEFAULT_REGION eu-west-1
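
To make the function available in new shell sessions, save it with funcsave, which writes it to ~/.config/fish/functions/:

funcsave aws-profile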

Creating an oauth2 custom lambda authorizer for use with Amazon's (AWS) API Gateway using Hydra

Summary

This article explains how to create an oauth2 custom authorizer for amazon’s AWS API Gateway.

I wanted to use the oauth2 client credentials grant, also known as 2-legged oauth 2 workflow, see: http://oauthbible.com/#oauth-2-two-legged. This kind of workflow is useful for machine to machine communication, where the client machine is also the resource owner.

     +---------+                                  +---------------+
     |         |                                  |               |
     |         |>--(A)- Client Authentication --->| Authorization |
     | Client  |                                  |     Server    |
     |         |<--(B)---- Access Token ---------<|               |
     |         |                                  |               |
     +---------+                                  +---------------+

                     Figure 6: Client Credentials Flow

Ref: https://tools.ietf.org/html/draft-ietf-oauth-v2-31#section-4.4

If you struggle to work out which grant type to use, this diagram can be useful:

(Diagram: which OAuth 2.0 grant should I use?)
Ref: https://oauth2.thephpleague.com/authorization-server/which-grant/

To implement oauth in api gateway we need to carry out the following tasks, which are covered in detail later on:

  1. Setup an oauth server
  2. Create a custom authorizer
  3. Configure the API gateway

Setup and configure an oauth server

The first task was to evaluate what software I could use to act as an authorization and resource server. In order that the custom lambda authorizer could validate a token, I needed an implementation to expose a token validation endpoint as well as the normal token creation endpoint.

Below is a list of candidates I looked at

Software               Language  Description
Hydra                  Go        Opensource. Good documentation. Responsive maintainer(s), PR merged same day. API for token validation.
PHP OAuth 2.0 Server   PHP       Opensource. Good documentation. Unsure if it can validate tokens via an api?
Spring Security OAuth  Java      Opensource. Good documentation. API for token validation.

I also took a quick look at some other implementations, see: https://oauth.net/code/. In this article I chose to use hydra, mainly because I'm familiar with Go, it supports token verification using an api, and it looked really straightforward to set up and configure.

Setting up hydra

I ran hydra using the published docker image: https://hub.docker.com/r/oryam/hydra/

docker run -d --name hydra \
    -p 4444:4444 \
    -e SYSTEM_SECRET='3bu>TMTNQzMvUtFrtrpJEMsErKo?gVuW' \
    -e FORCE_ROOT_CLIENT_CREDENTIALS='8c97eaed-f270-4b2f-9930-03f85160612a:MxGdwYBLZw7qFkUKCFQUeNyvher@jpC]' \
    -e HTTPS_TLS_CERT_PATH=/server.crt \
    -e HTTPS_TLS_KEY_PATH=/key.pem \
    -v $(pwd)/server.crt:/server.crt \
    -v $(pwd)/key.pem:/key.pem oryam/hydra

This sets up hydra to use ssl and seeds the root credentials, which are used later on to perform administrative tasks with hydra.

Self-sign ssl certificate

The api gateway lambda authentication function will need to communicate with hydra. I chose to secure this communication using SSL/TLS. If you don't have an SSL certificate for your hydra instance, you could buy one or you can create your own self-signed certificate (for internal usage or test purposes). I chose to go down the self-signed route.

  1. Create a private key: openssl genrsa 2048 > key.pem
  2. Create a signing request: openssl req -new -key key.pem -out cert.csr
    Example answers:
    Country Name (2 letter code) [AU]:GB
    State or Province Name (full name) [Some-State]:London
    Locality Name (eg, city) []:London
    Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company Limited
    Organizational Unit Name (eg, section) []:
    Common Name (e.g. server FQDN or YOUR name) []:oauth.mycompany.local
    Email Address []:
  3. Sign the request: openssl x509 -req -days 3650 -in cert.csr -signkey key.pem -out server.crt

You can now use key.pem and server.crt to run the docker container as above.

Configuring hydra

  1. Create a system token
  2. Create a client(s)
  3. Assign policies to a client (optional)
  4. Test creating a client token
  5. Test validating a client token
  6. Test validating a client token against a policy
  7. Health check endpoint

Create a system token

The system token is used to perform administrative interactions with hydra, such as creating clients, validating tokens etc. The -u (username:password) value is the system client id and secret set in the FORCE_ROOT_CLIENT_CREDENTIALS environment variable.

Request

curl -k -X POST \
    -u '8c97eaed-f270-4b2f-9930-03f85160612a:MxGdwYBLZw7qFkUKCFQUeNyvher@jpC]' \
    -d grant_type=client_credentials \
    -d scope='hydra hydra.clients' \
    https://oauth.mycompany.local/oauth2/token

Result

{
  "access_token": "fIyy-W3j2cmNSP40GK9HmQ9wlmhzFpdcxia64JHN3po.ww3Ob46pPaj1tz_XfXG80BAnLy5XbwuLqSjmwnqh6Ks",
  "expires_in": 3599,
  "scope": "hydra hydra.clients",
  "token_type": "bearer"
}

Create a client

Create a client; this is a user of your api.

Request

curl -k -X POST \
    -H 'Authorization: bearer fIyy-W3j2cmNSP40GK9HmQ9wlmhzFpdcxia64JHN3po.ww3Ob46pPaj1tz_XfXG80BAnLy5XbwuLqSjmwnqh6Ks' \
    -d '{"id":"3094A219-52B1-4900-91F7-514C4392D8C3","client_name":"Client1","grant_types":["client_credentials"],"response_types":["code"],"public":false}' \
    https://oauth.mycompany.local/clients 

Note: In the request authorization header we use the access_token we obtained from the previous step. We also specify the client will access the system using the client_credentials grant.

Result

{
  "id": "3094A219-52B1-4900-91F7-514C4392D8C3",
  "client_name": "Client1",
  "client_secret": "(SDk!*ximQS*",
  "redirect_uris": null,
  "grant_types": [
    "client_credentials"
  ],
  "response_types": [
    "code"
  ],
  "scope": "",
  "owner": "",
  "policy_uri": "",
  "tos_uri": "",
  "client_uri": "",
  "logo_uri": "",
  "contacts": null,
  "public": false
}

Note: The client_secret has been generated. The client uses client_id and client_secret to create tokens (see 'Create a client token' below).

Create a policy

In our example we are creating two types of clients: read-only and write clients. The curl command below defines the read policy and associates it with subjects, which in the context of hydra are a list of client ids.

Request

curl -k -X POST -H \
    'Authorization: bearer fIyy-W3j2cmNSP40GK9HmQ9wlmhzFpdcxia64JHN3po.ww3Ob46pPaj1tz_XfXG80BAnLy5XbwuLqSjmwnqh6Ks' \
    -d '{"description":"Api readonly policy.","subjects":["3094A219-52B1-4900-91F7-514C4392D8C3"],"actions":["read"],"effect":"allow","resources":["resources:orders:<.*>"]}'  \
https://oauth.mycompany.local/policies

Note: In the above request we specify which resources the policy applies to. In our example we are specifying that the orders resource has the policy applied. The <.*> suffix is a wildcard which applies the policy to orders and any sub-resources.

Response

{
  "id": "03c59a92-1fa6-4df9-ad1e-e5d551bc2c71",
  "description": "Api readonly policy.",
  "subjects": [
    "3094A219-52B1-4900-91F7-514C4392D8C3"
  ],
  "effect": "allow",
  "resources": [
    "resources:orders:<.*>"
  ],
  "actions": [
    "read"
  ],
  "conditions": {}
}

Create a client token

This call is issued by the client application

Request

curl -k -X POST \
    -d grant_type=client_credentials \
    -u '3094A219-52B1-4900-91F7-514C4392D8C3:(SDk!*ximQS*' \
    https://oauth.mycompany.local/oauth2/token

Result

{
  "access_token": "1z4Bb_r8lgmUKaD1FyOgP0tBJ_UIafhX2-QyIvUgLN8.NHdZ3zm4Ly6mepP7flGJQMN6-YfKox3OyPPZiiMg-mk",
  "expires_in": 3599,
  "scope": "",
  "token_type": "bearer"
}

Test validating a client token

This is the call that the lambda function will need to make to validate a client token

Request

curl -k -X POST \
    -H 'Authorization: bearer fIyy-W3j2cmNSP40GK9HmQ9wlmhzFpdcxia64JHN3po.ww3Ob46pPaj1tz_XfXG80BAnLy5XbwuLqSjmwnqh6Ks' \
    -d 'token=1z4Bb_r8lgmUKaD1FyOgP0tBJ_UIafhX2-QyIvUgLN8.NHdZ3zm4Ly6mepP7flGJQMN6-YfKox3OyPPZiiMg-mk' \
    https://oauth.mycompany.local/oauth2/introspect

Response

{"active":true,"client_id":"Client1","sub":"Client1","exp":1481975503,"iat":1481971902,"aud":"Client1"}

Test validating a client token against a policy

This call could be used by the lambda function as an alternative, perhaps mapping the http verb to the policy action, e.g. GET=read or POST=write.

Request

curl -X POST -k \
    -H 'Authorization: bearer fIyy-W3j2cmNSP40GK9HmQ9wlmhzFpdcxia64JHN3po.ww3Ob46pPaj1tz_XfXG80BAnLy5XbwuLqSjmwnqh6Ks' \
    -d '{"token":"1z4Bb_r8lgmUKaD1FyOgP0tBJ_UIafhX2-QyIvUgLN8.NHdZ3zm4Ly6mepP7flGJQMN6-YfKox3OyPPZiiMg-mk","subject":"Client1","action":"read","resource":"resources:orders:123"}' \
    https://oauth.mycompany.local/warden/token/allowed

Response – Allowed

{"sub":"Client1","scopes":[],"iss":"hydra.localhost","aud":"Client1","iat":"2016-12-17T10:51:42.917937398Z","exp":"2016-12-17T11:51:43.049266177Z","ext":null,"allowed":true}

Health check endpoint

This call is useful for a load balancer (ALB or ELB) to determine whether a node is active or not.

Request

curl -k https://oauth.mycompany.local/health -i

Response

Note that there is no content with this response, which is why I included the -i curl parameter to show the response code.

HTTP/1.1 204 No Content
Date: Mon, 19 Dec 2016 10:04:59 GMT

Create a custom authorizer

Next up we examine how to create the lambda function that calls our hydra server. The code for this authorizer can be found on github: https://github.com/ewilde/oauth2-api-gateway. I used the serverless framework to help me build and deploy the authorizer and test endpoint.

Note: This is the first node application I've written, so apologies if it's not very idiomatic.

serverless.yml

functions:
  hello:
    handler: functions/handler.hello
    events:
      - http:
          path: hello
          authorizer: auth
          method: get
    vpc:
      securityGroupIds:
        - sg-575c752a
      subnetIds:
        - subnet-35561a7c
        - subnet-4e44d315
        - subnet-454bd668
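
The auth value referenced by authorizer also needs to be declared as a function in the same serverless.yml, alongside hello; something along these lines, with the handler path assumed from the repository layout:

functions:
  auth:
    handler: functions/auth.auth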

auth.js

I wrote a simple javascript library to interact with hydra which we instantiate here to use later on when validating an incoming token.

var HydraClient = require('./hydra');
var client = new HydraClient();

Below is the function to create the policy document to return to the api gateway when a client presents a valid token

const generatePolicy = (principalId, effect, resource) => {
    const authResponse = {};
    authResponse.principalId = principalId;
    if (effect && resource) {
        const policyDocument = {};
        policyDocument.Version = '2012-10-17';
        policyDocument.Statement = [];
        const statementOne = {};
        statementOne.Action = 'execute-api:Invoke';
        statementOne.Effect = effect;
        statementOne.Resource = resource;
        policyDocument.Statement[0] = statementOne;
        authResponse.policyDocument = policyDocument;
    }
    return authResponse;
};
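
For example, given a principal of 'user|Client1', an effect of 'Allow' and a method ARN (the account and API ids below are made up), the function returns a document shaped like this:

{
  "principalId": "user|Client1",
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "execute-api:Invoke",
        "Effect": "Allow",
        "Resource": "arn:aws:execute-api:eu-west-1:123456789012:abcdef1234/dev/GET/hello"
      }
    ]
  }
}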

Below is the actual authorization method that is called by the api gateway. It validates the incoming request and returns either:

  • Error: Invalid token
  • Unauthorized
  • Success: policy document

module.exports.auth = (event, context) => {
    client.validateTokenAsync({
        'access_token': event.authorizationToken
    }, function (result) {
        if (result == null) {
            console.log(event.authorizationToken + ': did not get a result back from token validation');
            context.fail('Error: Invalid token');
        } else if (!result.active) {
            console.log(event.authorizationToken + ': token no longer active');
            context.fail('Unauthorized');
        } else {
            console.log(event.authorizationToken + ': token is active will allow.');
            console.log('principal: ' + result.client_id + ' methodArn: ' + event.methodArn);
            var policy = generatePolicy('user|' + result.client_id, 'allow', event.methodArn);
            console.log('policy: ' + JSON.stringify(policy));
            context.succeed(policy);
        }
    });
};

hydra client

The function below calls hydra to make sure the token is valid and that the TTL has not expired.

// constants (not shown here) holds the hydra base url and the self-signed CA certificate
function validateToken(systemToken, clientToken, callback) {
    var tokenParsed = clientToken.access_token.replace('bearer ', '');
    console.log('Validating client token:' + tokenParsed);

    var request = require('request');
    request.post(
        {
            url: constants.base_auth_url + '/oauth2/introspect',
            agentOptions: {
                ca: constants.self_signed_cert
            },
            headers: {
                'Authorization' : 'bearer ' + systemToken.access_token
            },
            form: {
                token: tokenParsed
            }
        },
        function (error, response, body) {
            if (!error && response.statusCode >= 200 && response.statusCode < 300) {
                var result = JSON.parse(body);
                callback(result);
            }
            else {
                console.log(response);
                console.log(body);
                console.log(error);
                callback(null);
            }
        });
}

Configuring the API gateway and testing the application

Because we used serverless in this example there is really nothing to be done here. The serverless.yml configures the authorizer:

authorizer: auth for the endpoint /hello

To test the application:

  1. Deploy the serverless application `serverless deploy`
  2. Create a token
  3. In postman make a call to ‘/hello’ passing in your token in the Authorization header
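
As an alternative to postman, the same check from the command line, re-using the client token created earlier (the API id, region and stage in the URL are placeholders; use the endpoint printed by serverless deploy):

curl -i \
    -H 'Authorization: bearer 1z4Bb_r8lgmUKaD1FyOgP0tBJ_UIafhX2-QyIvUgLN8.NHdZ3zm4Ly6mepP7flGJQMN6-YfKox3OyPPZiiMg-mk' \
    https://abcdef1234.execute-api.eu-west-1.amazonaws.com/dev/hello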

Useful references

Podcasts

Alex Bilbie – OAuth 2 and API Security
Covers different grant types and what they're each appropriate for, as well as discussing some potential API security strategies for one of Adam's personal projects.
http://www.fullstackradio.com/4

Thought machine – API Gateway and lambda
Interesting discussion on lambda architectures
http://martinfowler.com/articles/serverless.html

Interview with Mike Roberts discussing serverless architectures
https://softwareengineeringdaily.com/2016/08/23/serverless-architecture-with-mike-roberts/

crane assemble: adding builds to existing projects

The crane assemble command (see the full docs) allows you to add a fully featured build script to your existing project. The video below shows this in action:

For more information visit the crane docs or check us out on github


Bootstrapping your new project using crane

Get the code

Visit us on github https://github.com/ewilde/crane

Or install the app

(screenshot: installing crane)

But what does it do?

crane is a command line tool that I developed with Kevin Holditch. It kick-starts development of a new project by templating the boring bits.

crane init ServiceStack.Plugin

Running this command creates the following items

  • Solution file
  • Project
  • Unit test project
    • Example test based on xbehave
  • VERSION.txt
  • Nuspec file for project
  • Build script
    • Downloads nuget.exe if missing
    • Performs nuget restore on solution
    • Builds solution in debug or release mode
    • Runs unit tests
    • Updates assembly info with correct version numbers
    • Packages project into nuget package
    • Publishes to nuget repository

In action

First initialize a new project using the syntax crane init {project name}

(screenshot: running crane init)

It creates a directory using the project name given in the init command

(screenshot: the generated project directory)

You can immediately build the project. Just run .\build.ps1 from the project directory

(screenshot: build output)

Here's what it looks like in Visual Studio if you open the solution file:

(screenshot: the solution open in Visual Studio)

adb (android debug bridge) not showing device using Moto G and Android 4.3

I've been developing Android applications for a while now. However, not owning an actual device, I've only ever used the emulator. Recently I shelled out for a Moto G and found it wasn't obvious how I could get it hooked up to my mac.

This was a case of RTFM (http://developer.android.com/tools/device.html); step 2 eluded me until I stumbled across it on a stackoverflow post:

To enable USB debugging:

1. Launch the settings application -> about

2. Click build number 7 times

3. Developer options should now be available from the main settings menu:

4. Enable USB debugging

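With USB debugging enabled and the phone plugged back in, adb should now list the device (the serial below is made up):

$ adb devices
List of devices attached
TA12345ABC    device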

How does Xamarin.IOS aka monotouch work?

Xamarin is a software development framework that allows developers to build applications for the iOS and Android platforms using c# and the .Net framework. The SDK has separate requirements for developing iOS and Android applications. The part of the SDK targeting iOS development is referred to as Xamarin.iOS or monotouch (the original name of the project).

Requirements for developing iOS applications using Xamarin.iOS

* Apple Macintosh computer: running OSX Lion or greater (10.7 or later)
* Apple Developer Program membership: $99 per year; allows downloading of the iOS SDK and publication of applications to the Apple app store
* iOS SDK and Xcode: required during compilation, and optionally can be used to design graphical user interfaces using its inbuilt graphical designer
* iOS device simulator: part of the SDK; allows running applications during the development process
* Xamarin Studio or Visual Studio: not strictly necessary, however does automate the build process
* Knowledge of c#: c# is the main language supported by Xamarin.iOS

Table 1 (Xamarin, Inc)

Mono is an open source implementation of the .NET Framework which can run across multiple operating systems, Windows, Linux and OSX. Mono is based on ECMA standards and is ABI (application binary interface) compatible with ECMA’s Common language infrastructure (CLI).

Xamarin.iOS compiles c# source code against a special subset of the mono framework. This cut-down version of the mono framework includes additional libraries which allow access to iOS platform specific features. The Xamarin.iOS compiler, smcs, takes source code and compiles it into an intermediate language, ECMA CIL (common intermediate language); however it does not produce ECMA ABI compatible binaries, unlike the normal mono compilers, gmcs or dmcs. This means any 3rd party .Net libraries you want to include in your application will need to be recompiled against the Xamarin.iOS subset of the mono framework using smcs.

Once a Xamarin.iOS application has been compiled into CIL it needs to be compiled again into native machine code that can run on an iOS device. This process is carried out by the SDK tool ‘mtouch’, the result of which is an application bundle that can be deployed to either the iOS simulator or an actual iOS device, such as an iPhone or iPad.

(Diagram showing how monotouch aka Xamarin.iOS works)

Due to restrictions placed by Apple, the iOS kernel will not allow programs to generate code at runtime. This restriction has severe implications for software systems that run inside a virtual machine using just-in-time compilation. Just-in-time compilation takes the intermediate code, for example mono CIL and compiles it at runtime into machine code. This machine code is compatible for the device it is running on at the time of execution.

To work around this restriction the mtouch tool compiles the CIL ahead of time. A process that the mono team describe as AOT, ahead of time compilation. See: http://docs.xamarin.com/guides/ios/advanced_topics/limitations


.Net Interview Questions

General Questions

Explain what a process is?

In general a process consists of or ‘owns’ the following:

  • A program image to execute, in machine code (think exe on disk)
  • Memory, typically some block of virtual memory
    • Executable code
    • Data (input / output)
    • Call stack
    • Heap, to hold intermediate data

General CLR Questions

1. Explain garbage collection in .Net?

Garbage collection will occur under one of the following conditions:

  • The system is running low on physical memory
  • The heap surpasses an acceptable threshold. (This threshold is continuously adjusted as the process runs)
  • GC.Collect is called

The managed heap

There is a managed heap for each managed process; the heap is initialized by the garbage collector. The garbage collector calls win32 VirtualAlloc to reserve memory and VirtualFree to release memory.

The heap is comprised of the large object heap (objects greater than 85k, normally only arrays) and the small object heap

Generations

The heap is split into generations to manage long-lived and short-lived objects. Garbage collection generally occurs with the reclamation of short-lived objects which normally account for a small portion of the heap

Generation 0: Contains short-lived objects, i.e. temporary variables. Collection occurs most here
Generation 1: Contains short-lived objects, is a buffer between 0 & 2 generations
Generation 2: Contains long-lived objects, i.e. static instances, stateful instances

Types of garbage collection

Workstation mode

More suitable for long-running desktop applications, adds support for concurrent garbage collection which should mean that the application is more responsive during a collection.

Server mode

Best suited for asp.net, only supported on multi-processor machines

References

MSDN: Garbage Collection

2. What is boxing / unboxing?

Boxing occurs when a value type is passed to a method which expects an object or a value type is implicitly cast to an object.

ArrayList list = new ArrayList();
list.Add(10);     // Boxing: the int is wrapped in an object on the heap

int i = 10;
object o = i;     // Boxing

Unboxing is the reverse of this process, taking an object and casting it back to the value type.

int x = 10;
Object y = x; // Boxing
x = (int) y;  // Unboxing

Problem?

Yes. There is a performance cost: when an item is boxed, a new object must be created and allocated on the heap, which takes roughly 20x as long as a simple reference assignment; unboxing carries roughly a 4x penalty.

Now with generics some use cases for boxing/unboxing go away. However, in silverlight/WPF, value converters and dependency objects can cause lots of boxing to occur.

3. What is a struct, when should you use one?

A struct is a value type and should be chosen instead of a class if:

  • It logically represents a single value
  • Has an instance size smaller than 16 bytes
  • It is immutable
  • It will not be boxed frequently

4. What are weak references, why do you need them?

Enables you to take out a reference to an object without stopping the garbage collector from reclaiming that object.

Useful if you have very large objects, which are easy to recreate.
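
A minimal sketch of the idea (LoadLargeObject is a stand-in for any expensive-but-recreatable object):

var cache = new WeakReference(LoadLargeObject());

object value = cache.Target;          // null if the GC has already collected it
if (value == null)
{
    value = LoadLargeObject();        // easy to recreate, so just rebuild it
    cache = new WeakReference(value);
}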

5. What is the dispose pattern?

The dispose pattern is used only for objects that access unmanaged resources. The garbage collector is very efficient in reclaiming memory of managed objects but has no knowledge of memory used by unmanaged native objects.
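
The standard shape of the pattern looks roughly like this:

public class ResourceHolder : IDisposable
{
    private IntPtr handle;       // some unmanaged resource
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);   // already cleaned up, no need to finalize
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;

        if (disposing)
        {
            // free other managed IDisposable members here
        }

        // free the unmanaged resource here
        handle = IntPtr.Zero;
        disposed = true;
    }

    ~ResourceHolder()
    {
        Dispose(false);              // safety net if Dispose was never called
    }
}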

6. What is the difference between a Dictionary<TKey, TValue> and Hashtable?

Dictionary<TKey, TValue>                         Hashtable
Minimizes boxing/unboxing                        Boxes value types: Add(object, object)
Needs external synchronization                   Provides some synchronization via Hashtable.Synchronized(Hashtable)
Newer (.Net 2.0 onwards)                         Older (since .Net 1.0)
Throws KeyNotFoundException if key not found     Returns null if key not found

Note that internally dictionary is implemented as a hashtable.

7. What is the cost of looking up an item in a Hashtable?

Retrieving the value of a dictionary or hashtable using its key is very fast, close to O(1) in big-O notation. The speed of retrieval depends on the quality of the hashing algorithm of the type specified for TKey.

Multi-threading Questions

1. How would you engineer a deadlock

  • Create two methods each acquiring a separate lock, that call each other say 5 times
  • Start two threads on separate methods

using System;
using System.Threading;

class Program
{
    private static int operations = 5;

    public static object lockA = new object();
    public static object lockB = new object();

    static void Main(string[] args)
    {
        Thread thread1 = new Thread(DoSomethingA);
        Thread thread2 = new Thread(DoSomethingB);

        thread1.Start();
        thread2.Start();

        thread1.Join();
        thread2.Join();
        Console.WriteLine(operations);
        Console.ReadKey();
    }

    public static void DoSomethingA()
    {
        lock (lockA)
        {
            Console.WriteLine("Lock DoSomething A " + Thread.CurrentThread.ManagedThreadId);
            if (operations > 0)
            {
                operations = operations - 1;
                Thread.Sleep(100);
                DoSomethingB();
            }
        }

        Console.WriteLine("Release DoSomething A " + Thread.CurrentThread.ManagedThreadId);
    }

    public static void DoSomethingB()
    {

        lock (lockB)
        {
            Console.WriteLine("Lock DoSomething B " + Thread.CurrentThread.ManagedThreadId);
            if (operations > 0)
            {
                operations = operations - 1;
                Thread.Sleep(100);
                DoSomethingA();                    
            }
        }

        Console.WriteLine("Release DoSomething B " + Thread.CurrentThread.ManagedThreadId);
    }
}

2. What are race conditions, and how do you stop them?

Occur when more than one thread attempts to update shared data:

int x = 10;

// Thread 1
x = x - 10;

// Thread 2
x = x + 1;

To stop race conditions from happening you need to obtain exclusive locks; use a semaphore, mutex, or ReaderWriterLockSlim locking mechanism.

3. What are some lock-less techniques for avoiding race conditions?

You can use volatile or Thread.MemoryBarrier() or the Interlocked class
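
For example, Interlocked performs the read-modify-write as a single atomic operation, so no lock is needed:

using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    private static int counter;

    static void Main()
    {
        // counter++ from many threads would lose updates; Interlocked.Increment does not
        Parallel.For(0, 1000000, i => Interlocked.Increment(ref counter));
        Console.WriteLine(counter);   // always 1000000
    }
}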

4. What does the keyword volatile mean or do?

It ensures that the value of the field is always the most up-to-date value. Commonly used in multi-threaded applications that do not use locks to serialize access to shared data. When using a lock it causes the most up-to-date value to be retrieved.

Values can become stale when threads run on different processors asynchronously.

5. What is the difference between ManualResetEvent and AutoResetEvent?

With a ManualResetEvent, once signalled via Set() all waiting threads can proceed until Reset() is called. With an AutoResetEvent only one waiting thread is unblocked per Set(), and the wait handle goes back to blocking other waiting threads until the next Set() call.
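
In code terms:

var manual = new ManualResetEvent(false);
manual.Set();    // releases every thread blocked in manual.WaitOne(), stays signalled until Reset()

var auto = new AutoResetEvent(false);
auto.Set();      // releases exactly one waiting thread, then resets itself to non-signalled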

Reactive Extensions (RX)

1. What are the IObservable<T> and IObserver<T> interfaces

IObservable<T> is a collection of things that can be watched and defines a provider for push-based notification. It must implement a Subscribe method.

IObserver<T> is essentially the listener to the collection and needs to implement OnNext, OnError, OnCompleted

Collections

1. What is the difference between IEnumerable<T> and IEnumerator<T>

IEnumerable<T> is a thing which can be enumerated over. Returns an IEnumerator

IEnumerator<T> is the thing that can do the enumeration; it knows how to navigate the collection.
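
The relationship in code; foreach is effectively sugar for the enumerator loop below:

using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        IEnumerable<int> numbers = new List<int> { 1, 2, 3 };

        // what foreach (var n in numbers) does under the covers
        using (IEnumerator<int> e = numbers.GetEnumerator())
        {
            while (e.MoveNext())
            {
                Console.WriteLine(e.Current);
            }
        }
    }
}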

Software design

1. List some design patterns

Creational patterns

Abstract factory

Provide an interface for creating families of related or dependent objects without specifying their concrete classes

http://en.wikipedia.org/wiki/Abstract_factory_pattern

Builder Pattern

Defines abstract interfaces and concrete classes for building complex objects

http://en.wikipedia.org/wiki/Builder_pattern

Singleton Pattern

Ensure a class only has one instance, and to provide a global point to access it.

Structural Patterns

Façade Pattern

A facade is an object that provides a simplified interface to a larger body of code

http://en.wikipedia.org/wiki/Facade_pattern

Decorator

Attach additional responsibilities to an object dynamically keeping the same interface. Decorators provide a flexible alternative to subclassing for extending functionality.

http://en.wikipedia.org/wiki/Design_pattern_(computer_science)

2. What is SOLID?

S  SRP  Single responsibility principle: an object should have only a single responsibility.
O  OCP  Open/closed principle: "software entities should be open for extension, but closed for modification".
L  LSP  Liskov substitution principle: "objects in a program should be replaceable with instances of their subtypes without altering the correctness of that program". See also design by contract.
I  ISP  Interface segregation principle: "many client specific interfaces are better than one general purpose interface".
D  DIP  Dependency inversion principle: one should "Depend upon Abstractions. Do not depend upon concretions." Dependency injection is one method of following this principle.