First attempt at using Rust on RISC-V

Rust seems to be becoming more and more popular these days. As an embedded C developer, I became interested in what it can be used for in the embedded space.

Working for a company that heavily utilizes RISC-V, I wanted to see how hard it would be to get Rust running on this platform.

Setting up the project

I started off using cargo to create a new project.

cargo new riscv

That created the standard Rust project inside a directory called riscv.

Then it came time to install the target and the LLVM tools. This can be achieved using the rustup tool.

rustup target install riscv32imac-unknown-none-elf
rustup component add llvm-tools-preview

Now we are ready to go.

Configuring the appropriate dependencies

For my project, I started with two dependencies: the riscv-rt runtime crate and the panic-halt crate. The following goes into my Cargo.toml file

[dependencies]
riscv-rt = "0.6.1"
panic-halt = "0.2.0"

Configuring the build

You need to configure the compiler to use the right build target, and also the correct linker script. The memory layout in the linker script below depends on the specific board you have, so the values shown here are only an illustrative example. Save it as memory.x, the name referenced in the linker arguments below.

MEMORY
{
    FLASH : ORIGIN = 0x00000000, LENGTH = 256K
    RAM : ORIGIN = 0x20000000, LENGTH = 64K
}

REGION_ALIAS("REGION_TEXT", FLASH);
REGION_ALIAS("REGION_RODATA", FLASH);
REGION_ALIAS("REGION_DATA", RAM);
REGION_ALIAS("REGION_BSS", RAM);
REGION_ALIAS("REGION_HEAP", RAM);
REGION_ALIAS("REGION_STACK", RAM);

Then we must create a .cargo directory, and within it a config.toml file with the following contents

[target.riscv32imac-unknown-none-elf]
rustflags = [
    "-C", "link-arg=-Tmemory.x",
    "-C", "link-arg=-Tlink.x",
]

[build]
target = "riscv32imac-unknown-none-elf"

main.rs

Next we need our main.rs file. This would have been generated by the cargo tool when we created the project.

To start we need to add the following to the top of the file

#![no_std]
#![no_main]

The first line tells Rust that we are not operating in the standard Rust environment, and won't be using the standard library; a much better write-up can be found in the Rust Embedded Book. The second line tells the compiler not to use the standard main interface as the program's entry point, since riscv-rt provides its own.

Now we must define the panic behavior and import the riscv runtime entry

use panic_halt as _;
use riscv_rt::entry;

Then we mark the main() function as the entry point for the code.

#[entry]
fn main() -> ! {

Putting it all together, we get quite a simple Rust application:

#![no_std]
#![no_main]

use panic_halt as _;
use riscv_rt::entry;

#[entry]
fn main() -> ! {
    loop {
        // your code goes here
    }
}

Compiling

Now that you have the code all in place, it's time to build. That can be achieved by running

cargo build

Then running a command like objdump will show you the disassembly, letting you convince yourself that it has actually generated RISC-V assembly

cargo objdump --bin riscv -- --source

Next we'll download the code onto a RISC-V processor and execute it.

My turbulent ride with Australian Internet

I've now lived in Australia for about 7 months. In that time I've had three ISPs. Yes, three. In my last 3 and a half years in New Zealand, I had one ISP. This, unfortunately, has enlightened me to the sad state of the internet in Australia and made me realise how lucky a country like New Zealand is with its Fibre to the Home rollout.

When I first moved into my property here I looked at a range of providers. Unfortunately, the property wasn't in an NBN area (due next month), so the only options we had were ADSL, Optus Cable, or 4G Home Broadband. This ruled out some providers who are NBN only, such as the highly-rated Aussie Broadband. I was pretty tempted by Optus Cable, however, I didn't want to spend the $200 connection and installation fee as I was renting and NBN was coming within the year, and I didn't want to commit to a 24-month contract. So I decided I'd go with ADSL because it at least was unlimited data. I looked at companies like iinet, Internode, TPG and was all ready to sign up for Internode (because I wanted IPv6). Unfortunately, I was put off by the upfront cost and lead time. As Internode is owned by TPG, to get cheaper pricing due to "naked DSL" I'd have to wait up to 2 weeks to be moved to a TPG DSLAM.

I decided I wanted to avoid that, so instead, I looked at other options. I ended up settling on Telstra. The reasons for this were that they'd not charge me any connection fees, no contract, and they'd give me a modem that included 4G backup. This would allow me to have Wi-Fi at home right away and also if my DSL went down I'd be able to still have internet. Seemed like a good deal.

A couple of days later my Telstra modem arrived. We plugged it in, hooked up the phone line. Of course, as the ADSL hadn't been connected yet, the 4G backup kicked straight in. Awesome!

I kept a keen eye on the Telstra order tracker, because the 4G backup got a little slow at night. Eventually, it said that I was connected. But ... no DSL sync. No matter, 4G backup will cover it until it's sorted. I got on the phone to Telstra technical support to lodge a fault, and queried whether it might be an issue with the addresses (my neighbour mentioned issues with getting services connected on the subdivided section). I was told one of the biggest lies I've ever heard: "All of our lines are 100% perfectly connected". Yeah, sure.

Anyway, fault lodged, I'll keep using the unlimited 4G backup. Or so I thought. A couple of days later even that stopped working. I contacted Telstra and was told, "oh, you're out of data". I said it's supposed to be unlimited. They agreed and added more data. That didn't help. That evening I called them back and was given the usual restart-the-modem routine. They even tried to debug the DSL again. Eventually, they decided the modem must be broken.

New modem arrives, 4G backup works, no DSL. So a technician is scheduled. He comes to the house and then disappears to the exchange. Remember all those 100% perfectly connected lines? Not so 100% perfectly connected. He sorts that out and we're good to go. A week later, back to 4G backup, left it a couple of days and it resolved itself. No worries, worked like it should right? Two weeks later, 4G backup again. Leave it a week, still no DSL. Back on the phone, all the usual hassle and then a fault lodged. Next day ... no 4G backup either. Login to my account ... "No active services". I thought that was quite odd, so I queried Telstra.

They'd cancelled the account. Hey, when something might be too hard, just pretend it didn't exist. Over the next two months, I fought a battle with them, trying to return a modem, be credited back money for a service they never provided, and even an overdue bill notice. This all ended up with my lodging a formal complaint with Telstra.

At this point, I probably should have just gone with a TPG provider. But instead, I signed up with Exetel. Exetel themselves were great to deal with, however, Telstra strikes again. After two weeks, I still had no internet. This was despite multiple technicians being sent to the exchange. In the end, I cancelled and asked for a full refund. I had to return the modem as part of that.

I was at this point so frustrated, that my only viable option was to get 4G home broadband. I looked at Exetel, but they limit it to 12/1 Mbps. I considered Yomojo who are full speed but slightly cheaper than Optus. In the end, I just went direct with Optus. I figured it would be easier to get it straight from the network operator.

And since then I've had solid reliable internet. Now and then it slows down, but it generally works fine. The only issue I have is that 500GB can be a bit limiting, especially now we're working from home and watching a lot more Netflix. Their overage options are quite poor, but when your options are quite limited you take what you can get.

NBN becomes RFS at the end of next month. I thought hard about whether to stay on 4G (unfortunately I'm two houses outside of Optus 5G coverage, otherwise I'd snap that up in an instant) or change to the NBN. In the end, 500GB is too limiting for us, and we'll be getting an NBN connection. Now I just have to choose a provider.

Using an OpenWRT router for 2degrees (Snap) IPv6

2degrees Broadband (previously Snap) offer IPv6 to all their residential customers, with the preferred method being to use one of their CPEs, the Fritzbox 7340 or the Fritzbox 7390. These devices come at a price premium, so I decided to look for a cheaper alternative using what I already have: a Draytek Vigor 120 and a TP-Link TL-WR1043ND with OpenWRT installed. I use the Vigor 120 in bridged mode to allow the TL-WR1043ND to hold up the connection using PPPoE. This should work on their ADSL2+ and UFB connections (you can VLAN tag the WAN port as VLAN10, which I believe is required for UFB, but I won’t go into how to do this). I am unsure if it will work with VDSL2, however.

I am unsure whether this method will work with a non-bridged modem (unless of course it’s capable of doing IPv6 itself, in which case you probably don’t want this tutorial), but I suspect it won’t, as your PPP session needs to be assigned a link-local IPv6 address.

This assumes you are already using PPPoE to connect via an OpenWRT box, if you’re not I advise setting this up first. The OpenWRT website provides excellent instructions on how to do this.

The first step is to install the required packages into OpenWRT. I am using Attitude Adjustment, but have previously used Backfire, so these settings should still work.

ppp-mod-pppoe for pppoe connectivity
kmod-ipv6
wide-dhcpv6-client
radvd

The next step is to enable IPv6 negotiation on your PPP link. This can be done through LuCI under Network > Interfaces > WAN > Advanced Settings by selecting Enable IPv6 negotiation on the PPP link. Alternatively, if you prefer to edit the configuration files, you can add option ipv6 ‘1’ under config interface ‘wan’ in your /etc/config/network file
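If you go the configuration-file route, the relevant stanza of /etc/config/network would look something like this (a sketch only; the ifname, username, and password are placeholders that will differ on your setup):

```
config interface 'wan'
        option ifname   'eth0.2'        # your WAN port; name will vary
        option proto    'pppoe'
        option username 'your-username'
        option password 'your-password'
        option ipv6     '1'             # enable IPv6 negotiation on the PPP link
```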

Snap use DHCPv6 to allocate you a dynamic IPv6 prefix (they don’t provide static IPv6 prefixes yet, they really should but I understand this is a work in progress). Therefore you need to use a DHCPv6 client to get it.

The /etc/config/dhcp6c file should already exist; if it doesn’t, create it and copy the following into it. Edit it so that the enabled, interface, pd, and config interface ‘lan’ settings match the example below. Do not configure a prefix on the WAN interface, as I find that causes IPv6 to not work.

config 'dhcp6c' 'basic'
        option 'enabled' '1'                            # 1 = enabled; 0 = disabled
        option 'interface' 'wan'                        # This is the interface the DHCPv6 client will run on
        option 'dns' 'dnsmasq'                          # Which DNS server you run (only dnsmasq currently supported)
        option 'debug' '0'                              # 1 = enable debugging; 0 = disable debugging

        # Send options (1 = send; 0 = do not send)
        option 'pd' '1'                                 # Prefix Delegation
        option 'na' '0'                                 # Non-Temporary Address
        option 'rapid_commit' '1'                       # Rapid Commit

        # Request options (1 = request; 0 = do not request)
        option 'domain_name_servers' '0'
        option 'domain_name' '0'
        option 'ntp_servers' '0'
        option 'sip_server_address' '0'
        option 'sip_server_domain_name' '0'
        option 'nis_server_address' '0'
        option 'nis_domain_name' '0'
        option 'nisp_server_address' '0'
        option 'nisp_domain_name' '0'
        option 'bcmcs_server_address' '0'
        option 'bcmcs_server_domain_name' '0'

        # Override the used DUID, by default it is derived from the interface MAC
        # The given value must be uppercase and globally unique!
        #option 'duid' '00:03:00:06:D8:5D:4C:A5:03:F2'

        # Script to run when a reply is received
        option 'script' '/usr/bin/dhcp6c-state'

# Define one or more interfaces on which prefixes should be assigned
config 'interface' 'loopback'
        option 'enabled' '1'                            # 1 = enabled; 0 = disabled
        option 'sla_id' '0'                             # Site level aggregator identifier specified in decimal (subnet)
        option 'sla_len' '16'                           # Site level aggregator length: 64 minus the delegated prefix size (Snap delegates a /48, so 64 - 48 = 16)

config 'interface' 'lan'
        option 'enabled' '1'
        option 'sla_id' '1'
        option 'sla_len' '16'
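The sla_len arithmetic can be sanity-checked with Python’s ipaddress module (2001:db8::/48 is a documentation prefix standing in for whatever Snap actually delegates):

```python
import ipaddress

# stand-in for the delegated prefix
delegated = ipaddress.ip_network("2001:db8::/48")
sla_len = 64 - delegated.prefixlen   # bits available for subnetting
sla_id = 1                           # the LAN subnet from the config above

# the /64 assigned to an interface is the delegated prefix plus sla_id
lan = ipaddress.ip_network((int(delegated.network_address) | (sla_id << 64), 64))
print(sla_len, lan)  # -> 16 2001:db8:0:1::/64
```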

Now if you restart your router you should get an IPv6 prefix assigned to your LAN interface. Great! That’s what we wanted, but it’s not much use if we can’t tell the machines on our network what our IPv6 prefix is.

That is where radvd comes in. It’s a router advertisement daemon, that can be used to distribute our prefix to our clients. To configure radvd edit the /etc/config/radvd file and update the settings to match below. You must NOT put a prefix in the list prefix because we are assigned a dynamic prefix and we need radvd to work out what prefix to advertise to our connected devices.

config interface
        option interface        'lan'
        option AdvSendAdvert    1
        option AdvManagedFlag   0
        option AdvOtherConfigFlag 1
        list client             ''
        option ignore           0

config prefix
        option interface        'lan'
        # If not specified, a non-link-local prefix of the interface is used
        list prefix             ''
        option AdvOnLink        1
        option AdvAutonomous    1
        option AdvRouterAddr    0
        option ignore           0
        option AdvValidLifetime 3600
        option AdvPreferredLifetime 600

config route
        option interface        'lan'
        list prefix             ''
        option ignore           1

If you prefer, change AdvValidLifetime and AdvPreferredLifetime. I use these relatively low values because I often restart my router and am given a new prefix, and my machines would otherwise prefer and use the old one until expiry, meaning I can’t establish IPv6 connections. If you have a stable connection that rarely gets rebooted, my recommendation is to use something like 3600 for both.

Save that file and restart your router; everything should work and you should have IPv6 connectivity on your machines. Be aware that your machines will not get an IPv6 DNS server, so all DNS queries will still be executed over IPv4. This generally isn’t a problem, as DNS servers should still return IPv6 records to you regardless of the IP version used to access them.

As all your devices that support IPv6 now have a globally routable address, I recommend having firewalls turned on everywhere. However, sometimes we have phones etc. that don’t have firewalls, and perhaps you want to only allow incoming traffic to such a device if there has been outgoing traffic first. OpenWRT can handle this with ip6tables. Installing the following packages and rebooting should do the trick. I say should as I can’t exactly remember whether I had to do more, but if it doesn’t work leave a comment and I’ll investigate my configuration.

ip6tables
kmod-ip6tables
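A minimal sketch of what such stateful filtering looks like with ip6tables (this assumes the upstream link is ppp0; your interface name may differ):

```
# Allow replies to outbound traffic, and ICMPv6, but drop
# unsolicited inbound connections arriving on the PPP link
ip6tables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
ip6tables -A FORWARD -p ipv6-icmp -j ACCEPT
ip6tables -A FORWARD -i ppp0 -m state --state NEW -j DROP
```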

Hurricane Electric DNS Hosting Service

I recently wanted to move my DNS away from my web host’s cPanel-managed system because I really dislike cPanel (they are soon moving to Plesk, but this provided the perfect opportunity to separate my services). I looked around at DNS hosting services, including Zonomi, a New Zealand-based paid service with DNS servers located around the world. However, I was worried about the cost, as one of my domains alone would use up the allocated records.

I asked around and was told about Hurricane Electric’s DNS service. It’s free and feature-packed. It can handle a load of record types (including SSHFP) and can even do reverse zones if you need them. One of its neat features is built-in dynamic DNS. I previously had a CNAME in my DNS pointing to a dyndns address from no-ip.com; now it’s just an A record. Awesome!

They are a service I would highly recommend.

If your domain registrar allows you to specify the IP addresses of your DNS servers, you can use this to configure vanity NS records: change the NS records at your registrar to something like ns1.your-domain.com with the IP address of ns1.he.net, and so on up to ns5. ns2 to ns5 provide a dual-stacked IPv6 service, which is a neat benefit.

Finally, if you are worried about query times for users outside the States, it’s not that big a deal: most visitors will be using recursive DNS servers provided by their ISPs or employers, which cache your records, so subsequent users will receive responses quickly until the records expire and need to be re-requested.

Thanks to Brad Cowie for pointing me to them.

Mobile Number Portability

Time and time again I hear people in New Zealand say that the thing stopping them from changing mobile telco is the fact they would have to change their number. And over and over again I hear people refer to themselves as being on 021, implying they are on Vodafone, or on 027, implying they are on Telecom. Vodafone, Telecom, 2degrees etc. do each have their own block of numbers (021, 027, 022) that they allocate to new customers, but that doesn’t mean a given number is still on that network, and it hasn’t for a long time.

My phone number starts with 027, but I am not on Telecom anymore. I changed to 2degrees over 6 months ago, and haven’t really looked back. Porting is really simple in New Zealand, and can take as little as 3 hours (mine took about 3 hours from when I requested it until it was done, but it can take longer). 2degrees simplify this process: if you are a 2degrees customer, you can log in to their My 2degrees portal and request your number be ported. All you need is the SIM card number from your old SIM card, or the ESN number if you are on Telecom CDMA (which is closing next year anyway). This information is also required by other networks, but I am not sure what their process is for getting a number ported in.

So people no longer have the excuse of “I can’t change my number” when deciding which mobile telco to give their business to. It’s completely irrelevant. That makes competitive pricing and high-quality service even more important if telcos want to keep their customers (I’m looking at you, Vodafone and Telecom).

Reducing Web Server load using Amazon S3

Anyone who runs a website will know that eventually a site can (hopefully) become so large and popular that one server is simply not enough to host all the content or handle the load thrown at it. A common fix is to add more servers and load balance them. But what if you can’t afford more servers? There is a very cheap alternative: Amazon S3. It is a cloud storage service provided by Amazon Web Services, with extra features like access control, enabling public access, and setting custom headers. The ultimate goal would be a fully fledged Content Delivery Network, but for starters Amazon S3 easily does the trick. All you pay for is the storage space you use and the data you actually transfer.

So how does this help? By placing your content (images, video, even CSS) on Amazon S3 and linking to it via an S3 address, the end user pulls that content from Amazon S3. This reduces the number of connections and the amount of data your server has to handle, letting it answer other requests faster. You can also set Cache-Control headers on the files so the client caches them, sparing you the cost of the user requesting the same file every time. It makes things faster for the user too.

I use Amazon S3 on my blog, and by assigning the S3 bucket name as a CNAME on my domain I can use a nice URL to access my content, making it look highly personalized. If you are using WordPress, there are a number of plugins that offer Amazon S3 integration; my favourite is W3 Total Cache, which uploads the files it thinks should be served statically and automatically rewrites the URLs to them. If you later move to Amazon CloudFront, it makes that change easy too.
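As a rough illustration of the URL rewriting such plugins perform, here is a sketch in Python (static.example.com stands in for a bucket CNAME, and the path list is made up):

```python
import re

S3_HOST = "http://static.example.com"  # hypothetical CNAME pointing at the S3 bucket

def rewrite_static_urls(html: str) -> str:
    """Point local image/CSS/JS references at their S3-hosted copies."""
    pattern = r'(src|href)="(/(?:images|css|js)/[^"]+)"'
    return re.sub(pattern, lambda m: '%s="%s%s"' % (m.group(1), S3_HOST, m.group(2)), html)

page = '<img src="/images/logo.png"><link rel="stylesheet" href="/css/site.css">'
print(rewrite_static_urls(page))
```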

So if your website is being overloaded with traffic, especially for images and other static content, try moving it to Amazon S3. And once it is in there, if you decide you need the added power of Amazon’s CDN, it is extremely simple to set up CloudFront using your pre-existing content in S3 as the source.

Limiting bandwidth on Apache

Now I don’t actually need to enforce data limits on my Apache virtual hosts, because I only host myself, so what I use is irrelevant, but I thought it would be interesting to find out how to do it. I know there are several modules out there, but the one I found and like is mod_cband. It works really well and lets you enforce data limits, but also speed limits and connection limits if you are experiencing heavy load and wish to throttle it a bit. These instructions are based on Ubuntu Server 10.04 LTS.

So to start I downloaded the mod_cband source code, available here. After extracting it you need to compile it, which requires APXS2. I installed that using the following command

sudo apt-get install apache2-prefork-dev

I assume if you are using the threaded version you would need to install the apache2-threaded-dev package. Now after you have installed this package, you can execute ./configure to begin the first phase. This will check dependencies etc and tell you if you are missing anything. You shouldn’t do, but if you are, resolve them before you continue.

Next you need to actually compile the source code. I had a problem here: the Makefile needs to be altered slightly. You need to change the line that reads

APXS_OPTS=-Wc,-Wall -Wc,-DDST_CLASS=3

and add in -lm so that it now reads

APXS_OPTS=-lm -Wc,-Wall -Wc,-DDST_CLASS=3

If you don’t, you will get an error when you try to start Apache.

Now execute make and, when that has completed, execute sudo make install. You will get some warnings regarding comparison of different types, but they shouldn’t affect the running of the module. This will compile the library and install it into the correct location. Now enable it by typing sudo a2enmod cband (assuming of course you are using a2enmod; otherwise you will have to manually edit the configuration files to check that it is there).

Now restart Apache and it should load everything correctly. It is then time to configure your Apache virtual hosts to limit bandwidth. There are many configuration options, so I will only explain how to enforce data limits; for the other options refer to the mod_cband documentation.

To enable mod_cband your virtual hosts MUST have a ServerName directive, and all cband directives must come after it. If they don’t, Apache will throw all sorts of warnings when you try to start it. The simplest and quickest way to enable a data limit on a virtual host is to place the following directive in its configuration.

CBandLimit 10M

That will place a limit of 10 megabytes on the virtual host, which is pretty small (I am sure no one would actually offer a limit that low), but it shows how simple it is to set up. There are many other options you can configure, like a page to send when the bandwidth is used up, which HTTP code to send, and many more. You can also configure a status page that shows the virtual hosts and their various restrictions.
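Putting that together, a minimal virtual host might look like the sketch below. The domain and paths are placeholders, and I am quoting the CBandPeriod and CBandExceededURL directives from memory of the mod_cband documentation, so double-check them against it:

```
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example

    # cband directives must come after ServerName
    CBandLimit 10G
    CBandPeriod 4W
    CBandExceededURL http://example.com/over-limit.html
</VirtualHost>
```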

Updating METADATA on Amazon S3 objects

So I host the static content from my blog on the Amazon S3 Simple Storage Service. This allows me to remove some of the load for static content from my server. However, it means that over time I pay for the S3 hosting, and with a lot of requests this could end up costly. So how do I get around this? By setting the Cache-Control header on the objects in S3, I can ensure that the static content is cached by the remote user for however long I want; in this case I have set it to 7 days. Updating all the files in S3 manually would take a long time, so I use the Python code below to update the objects in my S3 bucket.

I had to modify it to support encoding, as I use gzip encoding on some of the static content to reduce the amount of data needing to be transferred.

from boto.s3.connection import S3Connection

connection = S3Connection('API_KEY', 'API_SECRET')

buckets = connection.get_all_buckets()

for bucket in buckets:
    for key in bucket.list():
        print('%s' % key)
        encoding = None
        if key.name.endswith('.jpg'):
            contentType = 'image/jpeg'
        elif key.name.endswith('.gif'):
            contentType = 'image/gif'
        elif key.name.endswith('.png'):
            contentType = 'image/png'
        elif key.name.endswith('.css.gzip'):
            encoding = 'gzip'
            contentType = 'text/css'
        elif key.name.endswith('.js.gzip'):
            contentType = 'application/x-javascript'
            encoding = 'gzip'
        elif key.name.endswith('.css'):
            contentType = 'text/css'
        elif key.name.endswith('.js'):
            contentType = 'application/x-javascript'
        else:
            continue
        key.metadata.update({
            'Content-Type': contentType,
            'Cache-Control': 'max-age=604800'  # 7 days
        })
        if encoding is not None:
            key.metadata['Content-Encoding'] = encoding
        # copy the object over itself to apply the new metadata,
        # then re-apply the public ACL (the copy resets it)
        key.copy(key.bucket.name, key.name, key.metadata)
        key.set_acl('public-read')

Enabling IPv6 on a home network

IPv6 is the next-generation internet protocol. Currently few ISPs provide it to their customers, so uptake is slow. However, if you wish to have access to the IPv6 world now, there are options. If you only have a single machine then a tunnel is fine, but if you wish to add IPv6 to an entire network you need something more. If you have a spare old machine lying around, or a machine running Linux that is always on, you can configure it as a router and use it to provide IPv6 to your LAN.

I have IPv6 connectivity to all the machines that are connected to my network. To achieve this, I use an Ubuntu Linux box as a router, which has a tunnel configured. This allows all the computers to connect onto the IPv6 internet transparently. This is a guide on how I did it.

I use sixxs.net as my IPv6 tunnel provider. They provide the aiccu client, which configures and sets up the tunnel automatically, creating an interface called sixxs that is one end of the tunnel. First things first, you need to register an account at sixxs.net. After your account is approved you are able to create an IPv6 tunnel. This will only allow you to connect one machine, but it is a prerequisite for enabling access for other machines. It will take a while to get approved, but once it is you can install the aiccu client. On Ubuntu you can install it using:

sudo apt-get install aiccu

During setup it will ask you to enter information regarding your tunnel, most likely your sixxs.net login information. Once entered it should authenticate and complete the installation. If it hasn’t started automatically, you need to start it.

sudo service aiccu start

Or on older versions of Ubuntu try sudo /etc/init.d/aiccu start

Then it will configure the tunnel and you should be able to reach IPv6 sites. You can test this by typing traceroute6 ipv6.google.com. The next thing to do is provide IPv6 addresses to your network. To do this, you must apply for a subnet from sixxs. You will receive a /48 subnet, from which you assign /64s to your network. To distribute your prefix to your network you need something like radvd installed. Again, on Ubuntu it is as simple as typing

sudo apt-get install radvd

Now once radvd is installed, you need to edit the configuration file. This is usually stored in /etc/radvd.conf. So open it up and you want to enter the following:

interface eth0
{
  AdvSendAdvert on;
  AdvManagedFlag on;
  prefix 2001:4232:532::/64
  {
    AdvOnLink on;
    AdvAutonomous on;
    AdvRouterAddr on;
  };
};

The prefix comes from the subnet that sixxs has assigned you. In this case I was assigned 2001:4232:532::/48, so I chose to use the first /64 of it for a simple setup.

Now, of course, the interface connected to your LAN (that is, the interface on your router not facing the IPv6 tunnel) should have a static IP assigned to it. This makes it easier to remember and use. So I just assigned 2001:4232:532::1 to eth0. I won’t cover how to do this, as it is relatively simple if you have done any networking in Linux before.

You now need to tell the Linux kernel that you want it to forward IPv6 traffic. To enable IPv6 forwarding, edit /etc/sysctl.conf and add the following lines:

net.ipv6.conf.all.forwarding=1
net.ipv6.conf.default.forwarding=1

Now save this file and reboot. When the machine comes back up, check that aiccu and radvd have started (I find I always have to start aiccu manually). If so, your other machines should have global IPv6 addresses assigned to them using the prefix you gave radvd. However, I found this was not enough to allow my other machines to connect to the internet. After specifying the default route on the router as the IP at the sixxs end of the tunnel, all traffic from eth0 was routed out over my tunnel, and all the other machines appeared to have native IPv6 connectivity and were globally addressable.

You therefore need to ensure that your machines have firewalls installed, and if you like, set up IPv6 iptables on the router. This is what I have done to filter traffic that is not wanted in the network.

Also, as your IPv6 address will be based on your MAC address, you can easily be tracked by it. Windows enables privacy extensions by default, but Linux does not. To enable them on your Linux clients, edit /etc/sysctl.conf and add these lines:

net.ipv6.conf.wlan0.use_tempaddr=2
net.ipv6.conf.all.use_tempaddr=2
net.ipv6.conf.default.use_tempaddr=2

If you have eth0 then replace wlan0 with eth0, or add an extra line for each interface. all and default should cover all of them, but I like to specify them individually as well, just to be safe. I will write another article regarding ip6tables at a later date.
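To see why a SLAAC address is trackable, here is a sketch of how the EUI-64 interface identifier is derived from a (made-up) MAC address; the MAC is embedded in the address almost verbatim:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the SLAAC interface identifier (lower 64 bits) from a MAC."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                  # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]       # insert ff:fe in the middle
    return ":".join("%x" % (eui[i] << 8 | eui[i + 1]) for i in range(0, 8, 2))

print(eui64_interface_id("00:11:22:33:44:55"))  # -> 211:22ff:fe33:4455
```

Privacy extensions replace this stable identifier with randomised, short-lived ones.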

World IPv6 Day

World IPv6 Day is on June 8, 2011. It is a day on which several large organisations, such as Google, Facebook, Yahoo and Akamai, will offer their content over IPv6. I’m ready for it, and so on June 8 I will be browsing these sites over IPv6 for the day!

Here is what I get when I traceroute from my machine to ipv6.google.com

Tracing route to ipv6.l.google.com [2404:6800:8004::68]
over a maximum of 30 hops:

1 1 ms <1 ms 2001:4428:450::1
2 28 ms 26 ms 27 ms gw-113.wlg-01.nz.sixxs.net [2001:4428:200:70::1]
3 29 ms 26 ms 27 ms ge0-1-6.v6wlg0.acsdata.co.nz [2001:4428:0:6::1]
4 39 ms 50 ms 38 ms ge0-0-2321.v6akl1.acsdata.co.nz [2001:4428:0:911::4]
5 38 ms 39 ms 38 ms ten-0-0-0-134.bdr01.akl02.akl.VOCUS.net.au [2402:7800:110:511::d]
6 43 ms 38 ms 38 ms ten-0-2-0-400.bdr01.akl01.akl.VOCUS.net.au [2402:7800:110:1::1a]
7 62 ms 65 ms 62 ms 2402:7800:0:1::ca
8 62 ms 63 ms 94 ms 2402:7800:0:2::92
9 64 ms 63 ms 64 ms 2001:4860::1:0:9f7
10 67 ms 73 ms 70 ms 2001:4860:0:1::d7
11 63 ms 142 ms 65 ms 2404:6800:8004::68

Trace complete.

Flickering Flash in Firefox on Ubuntu x86-64

I have been plagued by an issue in Firefox when using Flash on 64-bit Ubuntu, from around version 10.10. Whenever I visited a website that used Flash, the Flash content would flicker, with white spots all over it. This was very annoying, as I was not able to use sites such as YouTube. To get around this I just used Chromium for sites that used Flash.

However, I recently discovered this was an issue with version 10.1 of the Adobe Flash player, and that using the 10.3 beta solved it.

Here is how to install it. In the terminal window type:

sudo add-apt-repository ppa:sevenmachines/flash
sudo apt-get update
sudo apt-get install flashplugin64-installer

Yahoo unlocks IMAP access

Up until recently it has been near impossible to access Yahoo IMAP through any client that wasn’t the Zimbra client, or a device like a BlackBerry or an Apple iPhone. Zimbra sends a special command to the Yahoo IMAP servers which authenticates it as an allowed client. I had been using a modified version of Thunderbird that also sent this command, as I prefer the interface and search in Thunderbird.

However, it now seems that Yahoo have allowed access to their IMAP servers without this command, and as a result all clients should be able to connect. I have set up my unmodified Thunderbird on my Ubuntu laptop and it worked fine. I also tried Outlook, and it worked without a hitch. One thing that is not clear is whether this is temporary or whether Yahoo are now offering it to all users permanently. It is possibly part of their plans to become more competitive in the webmail market, after suffering a 10% loss in users over the last year to rival services, which has made them the second largest behind Microsoft’s Windows Live Mail. Google has had a 21% increase in users, and they of course offer IMAP access.

Incoming Server Settings
IMAP Server: imap.mail.yahoo.com
IMAP Port: 993
IMAP security: SSL/TLS

Outgoing Server Settings
SMTP Server: smtp.mail.yahoo.com
SMTP Port: 465
SMTP security: SSL/TLS

SMTP requires authentication, using the same username and password you use for the IMAP server.

I find that if your email address is, say, user@yahoo.com then user will work as your username, but I have not tried the full email address as the username.
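With Python’s standard imaplib, for example, those settings would be used something like this (the credentials are placeholders, and the function is just a sketch):

```python
import imaplib

def yahoo_inbox(user: str, password: str) -> imaplib.IMAP4_SSL:
    """Open the INBOX over IMAP with SSL/TLS on port 993."""
    conn = imaplib.IMAP4_SSL("imap.mail.yahoo.com", 993)
    conn.login(user, password)
    conn.select("INBOX")
    return conn
```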