Configuring a Generic Wireless LAN Bridge

August 18, 2013

Some time ago, I purchased a generic 802.11n Wireless LAN repeater, after struggling to maintain a reliable wireless connection to my ADSL residential gateway/router, due to interference from neighbouring networks and poor signal propagation towards the back of the house.

I ended up buying it as a “cheap workaround”, after getting frustrated with being unable to maintain a reliable connection on my laptop or various phones – despite trying things like switching to less congested channels (usually 9, or 13), increasing transmission power (at the expense of battery life), and decreasing maximum bitrates to ridiculously low levels.

Technically, the bridge is a small, embedded Linux-based device built around a Realtek RTL8186 MIPS-derived system-on-chip, which supports Ethernet, 802.11, PCI, and even PCM audio. (That seems like overkill – but I suppose that it’s probably cheaper to design and distribute one universal, multipurpose chipset than it would be to produce cut-down variants.)

Common complaints that buyers on Amazon had were that the firmware was buggy, the device was unreliable, and that the device’s Web-based configuration tool supposedly became inaccessible, post-configuration.

However, I found that if I ignored the supplied instructions and configured the device over a direct Ethernet connection to my PC, I was able to set it up reliably, retaining access to its administration server whilst roaming throughout the house on a single SSID.

Please note that the values in the screenshots are specific to my home network, and are not factory defaults.

Unfortunately, since I have discarded the packaging, I can’t provide photos of its contents – although I’m sure that they consisted of at least:

  • The device itself
  • An instruction leaflet
  • A short Ethernet cable
  • A detachable 3-pin UK mains plug

To begin with, I unpacked the product; plugged it into a power outlet, and connected the Ethernet cable to my laptop.

Next, I temporarily disabled my laptop’s wireless LAN chipset, using the hardware kill-switch, to ensure that requests for IPv4 addresses wouldn’t be made via two hardware interfaces simultaneously.

The bridge ships with a DHCP server enabled by default, issuing IPv4 addresses within the 192.168.10.x range, and reserving one for itself – but we’ll change this, later.

First, navigate to the default configuration URL, using your preferred browser:

Next, select the “Wireless” option from the “Professional Setup” category:

Here, set the following options:

  • “Mode”: “Repeater”
  • “Network Type”: “Infrastructure”
  • “SSID of Connect to” (sic): Enter the SSID of your primary access point
  • Check the “Enable Universal Repeater Mode (Acting as AP and client simultaneouly” (sic) checkbox
  • “SSID of Extended Interface”: Enter the SSID of your primary access point again

If you have configured your primary access point to enable operation in 802.11n mode, then select either “2.4GHz (N)”, “2.4GHz (G + N)”, or “2.4GHz (B+G+N)” from the “Band” menu. Otherwise, “2.4GHz (B+G)” should deliver satisfactory performance for most WAN-based activities.

In order for seamless handover between access points using the same SSID to work well, it is then necessary to set the default 802.11 channel of your primary access point to 11. (This seems to be a hard-coded, preset value).

It may also be necessary to configure WEP, or WPA keys for your primary access point on the “Security” page; and I also recommend changing the configuration tool’s access credentials to something more secure than their default values.

After configuring these settings, the bridge will reboot automatically…

Once the bridge has rebooted, return to the “LAN Interface” page:

On this page, set the “DHCP” setting to “Client”, and apply changes. At this point, because the bridge will attempt to obtain its configuration IPv4 address from your primary access point/gateway’s DHCP server, it’d be a good idea to disconnect the Ethernet cable, and re-enable the PC’s wireless LAN interface.

Hopefully, at this point, a successful WLAN connection will have been made.

However, in order to maintain future access to the bridge’s configuration server, it is necessary to assign a static IPv4 address to its MAC address (printed on a sticker on the case – but it’ll probably also appear in system/DHCP server logs) using the configuration tool of your primary access point.

After configuring this, power-cycle the bridge, and attempt to access its configuration page via the newly-set IPv4 address. If successful, you should be presented with an authentication dialogue, and be able to see the configuration tool’s status page, as at the beginning.

Hopefully, this will be of assistance to other owners of these devices, or folks contemplating purchasing one.


Thoughts on Distributed Window Systems

October 12, 2011


In this post, I’ll propose Amadeus – a design for an experimental highly-distributed, extensible, and network-transparent windowing system that supports resolution-independence through the use of vector graphics technology, and scales from a single PC (or other device) to a large cluster of devices.

This will hopefully be achieved through the careful use of compression and adaptive data structures; the adoption of a multicast/broadcast architecture; and orchestration by federated or standalone Registrars.

My decision to write this proposal was inspired by interesting discussions with a friend regarding implementations of other systems (Display PostScript, NeWS, and the many window servers of Symbian OS); and reading various tirades against X11, plus documentation regarding various proprietary systems (especially Photon).

However, the system itself has a brand-new design (as far as I’m aware) – and probably plenty of stupid design mistakes that were rectified by others in the past…

High-Level Architecture

The Amadeus architecture consists of 4 main components:

  • The Rendering Surface – responsible for accepting hardware events, and rendering graphics data received from Applications via private channels or network broadcasts.
  • The Registrar – responsible for coordinating the activities of the rest of the architecture.
  • Applications – either built directly against a library that generates vector graphics data, and sends it to the Rendering Surface; or against a ported GUI widget toolkit (or compatibility layer for an existing windowing system) designed to do so.
  • Hardware Event Providers – responsible for collecting and dispatching hardware events.

These roughly fit together, like so:

Responsibility for font rasterisation and management lies either with the Rendering Surface implementation and a suitable rasterisation library (e.g. FreeType, or Monotype iType) – in the case where text is embedded in SVG data – or with applications designed to deliver pre-rendered text as vector shapes and paths.

Descriptions of other window system architectures, such as Microsoft’s GDI and various successors, X11, Apple’s Quartz, QNX’s Photon, NitPicker, Wayland, and variants of Display PostScript are best found elsewhere.

Authentication and Confidentiality

Authentication and confidentiality of signalling and graphical data network traffic is beyond the scope of this proposal – as other parties have more expertise in that area, and I feel that I cannot immediately improve upon existing designs. Those interested in such functionality should investigate IPSec (which should work with multicasting, according to this paper from Cisco), and SSL/TLS (or SSH) for private channels.

The Registrar

The Registrar announces its availability upon launch via network broadcasts (or multicasts); and tracks Rendering Surfaces, HWEPs and client applications interested in either listening for hardware events, or displaying windows on a per-machine, cluster, or network basis by unique name.

It also stores client configuration data (e.g. connected display resolutions, the characteristics of connected HIDs (keyboards, mice and game controllers), and preferred data structure widths), in addition to orchestrating the window management activities of machines within a cluster.

Whilst the intention is obviously to eventually support large-scale clusters and networks of “screens and machines”, with application windows freely distributed amongst them (in the case of X11-style invocation and display of applications installed on other machines); multiple Registrar instances (supporting multiple clusters or standalone PCs) shouldn’t conflict with each other, and it should always be possible for users to decide upon levels of isolation, and appropriate network topologies.

Multicast DNS (as implemented in Apple’s Bonjour/mDNSResponder, and Avahi), or D-BUS might be feasible technology options for implementing parts of this functionality.

The Rendering Surface

The Rendering Surface implementation accepts compressed SVG data from applications, and hardware events from HWEPs, received via either a private channel (e.g. a transient UDP socket with a port number known by the Registrar, a UNIX domain socket, or a platform-specific IPC mechanism), or a network broadcast/multicast transmission.

Instances thereof must register themselves with the Registrar using a unique name (and optionally specify a private channel), and accept any relevant events from Hardware Event Providers, such as mouse clicks and movements, and keystrokes.

The SVG data that applications generate should, post-decompression, render accurately under any quality SVG implementation – regardless of the ultimate destination of the bitmap data (e.g. a raw framebuffer, or Qt’s SVG rendering widget).

Compression Algorithms and Data Structures

Data structures used by Amadeus for signalling are designed to be extensible and easy to parse (explicitly identified using 64-bit field type IDs, and accompanied with content length fields); and capable of being dynamically resized to accommodate network connections with varying levels of quality (i.e. latency and bandwidth).

If so desired, it may be theoretically possible to adapt data structures from other protocols, to provide additional functionality – although in the case of some (e.g. the X11 Clipboard protocols), it would probably be a better idea to design new ones, in the long run.

Although it was originally intended that “plain” SVG 1.1 data be compressed according to the WBXML 1.3 specification from the Open Mobile Alliance, it should be possible to support alternative serialisations of an SVG data stream (e.g. transformed JSON), and additional compression algorithms (e.g. those implemented by ZLib/GZip) via the aforementioned extension mechanism.

An Open Source WBXML parsing and generation library (released under a variant of the BSD License, with a promotional clause), written in C++, is available from the Sybyx project on SourceForge. Although I haven’t attempted to use it (yet), I received the impression that its API is reasonably clean and well-designed.

Bilal Siddiqui’s article also provides a rough idea of a conversion technique, and an archive containing a set of XML files in both encoded, and non-encoded forms.

I have attempted to register the SVG 1.1 XML Document Type Definition ID (-//W3C//DTD SVG 1.1//EN) with the OMA, using their online form – although the process failed, due to a configuration error of the Web server.

Compression – State of the Art

Whilst I’m unaware of other, entire window system architectures utilising SVG for drawing, the technology has been successfully utilised as a significant part of KDE’s Plasma family of desktop environments; for rendering icons in the AVKON/S60 user interface framework for Symbian OS (and its successors); and for a multitude of other applications.

However, most implementations tend to assume that graphics files are stored as plain text XML files within a local (or remote/removable) file system, and can be quickly accessed prior to rendering – which works well for most applications, but is likely to be extremely inefficient within Amadeus, due to its network-bound, distributed nature.

The most obvious solution to this problem would be to use one of the aforementioned compression techniques (or another) – which is reinforced by this paper (circa 2003) from the seemingly defunct developers of the X-Forge game development toolkit, concluding that their proprietary compression technology compares favourably to PNG and JPEG bitmaps, in terms of resulting file size.

X-Forge’s developers were also able to achieve a bitmap-equivalent level of apparent image quality for a static set of textured game graphics (and further size reductions when cramming all of their resources into a ZLib-compressed archive), in addition to resolution independence – although in the case of Amadeus, it is highly likely that graphics will consist of both vector widgets, and wrapped bitmap images (e.g. photographs, Web graphics, and pre-rendered widgets from applications built against non-native toolkits), which may affect efficiency.

From very brief testing with GZip’s -9 argument, I was able to compress a copy of the SVG file (exported from Inkscape in “Plain SVG” format) corresponding to the architecture diagram to 1.00 KB (1,024 bytes), from 3.97 KB (4,075 bytes). The original, uncompressed version of the file, exported without optimisations to remove Inkscape-specific metadata, was 5.18 KB (5,314 bytes) in size.

As previously mentioned, I have not yet tested a WBXML implementation against these files, for comparison.

Bitmap Image Support

Through the use of data: URIs, and Base64-encoded image files, it is possible to embed bitmap graphics data into SVG content, so that the aforementioned common use cases (e.g. display of Web image content, and graphics from other windowing systems) can be supported.

This technique could be combined with a hybrid local-distributed image caching system that is integrated with the Registrar, delta encoding of SVG payloads, and checksum-based URIs for referencing extracted bitmap image data within the cache.

More sophisticated approaches to this problem are described in a technical reference document from NoMachine, related to their NX unicast display/window sharing architecture (which in turn is closely entwined with the X11 architecture).


In isolation, distributed windowing systems; multicast architecture-based distributed systems in general; and vector-based GUIs are nothing new, conceptually. What Amadeus brings to the table is an amalgamation of these disparate concepts in the form of a modern, flexible design that should be suitable for supporting multiple high-resolution displays, and users with wildly varying requirements.

Whilst I’ve left issues related to high-performance rendering of 3D graphics and video content, latency, security, and various other architectural aspects unresolved; I’ve hopefully provided an interesting starting point for debate on the design of such systems.

Thoughts on Process Invocation with Qt and PoCo

June 9, 2011

Since late 2010, I’ve enjoyed developing C++-based applications with the Qt framework. Overall, I find it to be intuitive, fairly well-designed, and well-documented.

However, one area of the framework that I’ve found unsatisfactory (or at least struggled with, despite my best efforts) is the QProcess class. Ideally, I’d like to be able to instantiate it once, on a per-method (or per-class) basis with the appropriate executable path and CLI arguments, and then dump its output directly into a QString for use elsewhere.

I vaguely recall successfully resolving ~90% of the problem in Stroma (although that project’s architecture is currently rather monolithic) – but I never managed to solve the rest of it (actually dumping the output of the invoked process into a QString/multi-line text box widget).

After taking a break from that project for a while, I found myself encountering a similar problem in another project; and after reading various forum/mailing list posts and pieces of documentation, sought to find alternative means of resolution.

The most promising initial candidate was Boost::Process, although it appears that development is still ongoing – therefore, it isn’t an “official” Boost component yet.

With that in mind, I decided to install and investigate Poco::Process, after seeing a mention of it on StackOverflow, last night.

Installation under Fedora was trivial (it involved running “sudo yum -y install poco-devel poco-doc” from a shell), and the appropriate header files were conveniently located in /usr/include/Poco for later perusal.

That said, I was sceptical about the quality and usability of the PoCo library itself, since the documentation and sample code felt relatively terse and incomplete; not to mention that guesswork (aided by this post) was necessary, when it came to actually integrating it into the build system.

However, once those hurdles are dealt with, basic integration with the Qt project build system simply involves appending the following to the project file:

#Headers for PoCo
HEADERS += /usr/include/Poco/Pipe.h /usr/include/Poco/Process.h \
/usr/include/Poco/PipeStream.h

#Libraries for PoCo
LIBS += -lPocoFoundation

Once the project file is configured, adding the following to your class header file (modulo existing references, of course) should work:

#include <QApplication>
#include <QString>
#include <QDebug>

#include <string>
#include <cstring>
#include <cstdio>
#include <iostream>
#include <Poco/Process.h>
#include <Poco/Pipe.h>
#include <Poco/PipeStream.h>
#include <Poco/StreamCopier.h>

using namespace Poco;
using Poco::Process;
using Poco::Pipe;
using Poco::PipeInputStream;
using Poco::PipeOutputStream;
using Poco::ProcessHandle;

Some of the aforementioned header files are unnecessary – although their presence doesn’t seem to cause any obvious problems at compilation or application execution time.

My method in question (a rather rudimentary/brute-force mechanism for returning the MIME type of a file as a QString object, based upon sample code from a presentation slide), looks like: 
QString FileTypeHandler::GetMimeType(QString aFileName) {

    qDebug() << "Inside FileTypeHandler::GetMimeType";
    qDebug() << "Have text: " << aFileName;

    // Build the command line: /usr/bin/file -i <filename>
    std::string mimeCommand("/usr/bin/file");
    std::vector<std::string> mimeArgs;
    mimeArgs.push_back("-i");
    mimeArgs.push_back(aFileName.toStdString());

    Poco::Pipe mimeOutput;
    ProcessHandle mimeHandle = Process::launch(mimeCommand, mimeArgs, 0, &mimeOutput, 0);
    Poco::PipeInputStream mimeInput(mimeOutput);

    std::string mimeData;

    // Note: operator>> extracts a single whitespace-delimited token
    mimeInput >> mimeData;

    qDebug() << QString::fromStdString(mimeData);

    return QString::fromStdString(mimeData);
}

In essence, it accepts a file name as its argument, invokes the file -i command to (hopefully) look up its MIME type, stores the result in a standard C++ string, and finally returns it as a QString for consumption by UI widgets.

In reality, it pretty much does just that – with the caveat that it doesn’t quite return the correct output. (It returns “/dev/null:” instead of “/dev/null: application/x-character-device; charset=binary“, for example).

Still, I hope that these notes are useful for others who are facing the same problem…

Maybe others can chime in with a better alternative, or other suggestions?

Project Iris: Affordable, Instant Connectivity for Syborg/QEMU

November 1, 2010

Apologies for not updating here as often as I wanted – although in order to keep things concise, I won’t detail the reasons for my hiatus in this post.

That aside, whilst I can still remember the details, I’d like to share a proposal for a novel (in my humble opinion – but I’m prepared to be corrected) method of potentially using unmodified, off-the-shelf Nokia handsets as modems under Symbian OS running on QEMU.

Please note that I have so far been unable to implement this, or test certain individual components (e.g. the Linux PhoNet stack); although I believe from the research that I’ve done that individual components should work in isolation.

Additionally, this isn’t intended to be a competitor to the excellent Wild Ducks project, or the ad-hoc efforts surrounding getting regular modems utilising Hayes/AT commands to work, either. (It’s for folks who for whatever reason either can’t afford to acquire a fully fledged Wild Ducks set-up, don’t want to commit themselves for the long-term, or just want a quick-‘n’-dirty way to test stuff that requires network connectivity).

With that in mind, I’ll introduce the architecture diagram, and hopefully try to provide further details – because a picture is apparently worth a thousand words:


The system itself consists of the following components, in no specific order:

  • A version of QEMU with customisations specific to the Symbian Platform, as detailed in my ancient post on the Symbian Blog – and a few others, since then!
  • Two brand new components, which will be described in further detail later (the TI SSI bus “pseudo-modem” and the raw PhoNet-to-SSI bridge)
  • The Linux PhoNet protocol stack, which was contributed to the mainline Linux kernel by Nokia on behalf of members of what was once known as the “Maemo Computers” department (if memory serves correctly)
  • Your favourite Nokia device, providing that it supports USB connectivity and the “PC Suite” profile – since that’s how we can access certain baseband services via PhoNet! (A well-kept secret, so it seems)…
  • The Symbian Platform (which consists of the Symbian OS, UI framework, middleware and other components) and the baseport – Syborg, in the case of Project Iris
  • Nokia’s baseband “TSY” (telephony support plug-in), which should work in conjunction with a well-designed TI SSI bus “pseudo-modem” and the raw PhoNet-to-SSI bridge to simulate the presence of a real Nokia baseband by proxy 🙂

The most interesting components are the TI SSI bus “pseudo-modem” and the raw PhoNet-to-SSI bridge, which are pivotal to making this thing work.

The raw PhoNet-to-SSI bridge can potentially either be integrated into QEMU, or left standalone – although designing the IPC mechanism for the latter use-case is left as an exercise for the reader.

Communication with the device could occur via either a /dev/phonet0 device node (if such a thing existed, but according to this IRC log, it seems that it doesn’t under certain circumstances), or directly bound low-level datagram/pipe sockets to communicate with the user’s handset via raw PhoNet/ISI packets encapsulated in USB frames.

Obviously, the raw PhoNet-to-SSI bridge will encapsulate and decapsulate PhoNet packets that are transmitted/received by the handset into Texas Instruments-proprietary SSI frames for consumption by the “pseudo-modem”.

The “pseudo-modem” works in conjunction with the Nokia TSY (as mentioned earlier) and the raw PhoNet-to-SSI bridge; and will be a brand new, integral component of QEMU. It has minimal state of its own; and other than creating the illusion of a genuine Nokia/TI modem’s presence, it serves solely to transport packets between the bridge and the TSY.

Finally, the interaction between the TSY, the network and telephony stacks, and other parts of Symbian OS is extensively documented elsewhere.

For those curious about the title, the “instant” bit refers to the fact that as of recent versions of the Linux kernel and NetLink stuff, things should Just Work™ when a PhoNet device is connected (according to this page and this presentation from 2009), and that limited hardware knowledge is necessary to use one – just plug it in and switch it on.

The “affordable” bit refers to the fact that Nokia devices are relatively low-cost, easy to obtain, and plentiful (unlike specialist hardware such as the BeagleBoard and standalone GSM modems – as great as they are, for example).

A Quick-ish Update

August 23, 2010

Since folks have asked nicely, I thought that I’d compose a quick post, whilst being in the process of making last-minute preparations for a trip to London.

The grand plan is that I’ll be stopping in York with a relative tonight; and then taking a coach down to London for roughly two days, in order to spend some time at the Symbian Foundation as part of an ongoing volunteering arrangement.

I probably shouldn’t discuss the agenda at this stage, although the arrangement thus far is that I’ll be meeting with either William or Sebastian on the 24th to continue working on enhancing the developer documentation, in addition to doing something (hopefully exciting!) with the build team.

On the 25th, I’ll have an opportunity to catch-up with others, and meet folks whom I haven’t already seen (including Victor); plus I might have time to do some sightseeing on the London Eye.

Finally, I also have some (mostly negative) news regarding my ongoing university application saga to share, upon my return.

Playing with WiTango Server 5.5 and Apache 2.2.15 under Windows XP

March 28, 2010

I don’t particularly have a need to use this stuff; and the WiTango software itself is prehistoric (the last release for Windows was made in 2006, and the last release for UNIX clones/relatives was made in 2005), but I felt like a challenge.

After obtaining and unpacking the approximately 9MB archive, I proceeded to run the installer, and followed most of the instructions provided – stopping only to obtain a “Lite” licence key using a spare GAfYD GMail account, as well as a copy of the Apache installer.

During the installation process, I ended up having to answer “Other” to a prompt regarding HTTP daemons; since Apache wasn’t installed, and I’ve been burnt previously by the outdated and otherwise dodgy WiTango Apache modules (the Linux ones don’t even work with Apache 2.2 as supplied by Fedora, and most of the supporting tools in the Linux package are broken – but that’s by-the-by).

Eventually, I was greeted with a dialogue with rather useless (incomplete) information, and peeked inside the directory at C:\Program Files\WitangoServer\5.5, to discover a Witango.exe executable that did nothing whatsoever when invoked, a rough ReadMe file, and various other DLLs, configuration files and directories.

I then installed Apache – ensuring that it wasn’t started as a service, and operated on TCP port 8080 by default, before returning to WiTango’s ReadMe file to look for the magic httpd.conf lines that would supposedly make things work.

Said ReadMe file recommends using the following – which wouldn’t work, unless you copied a ton of DLLs around:

# Witango Apache Plug-in configuration

# This loads the Witango 5.5 client for Apache 2.2 to
# enable communication with the Witango Application Server
LoadModule WitangoModule modules/
AddType application/witango-application-file taf tml thtml tcf

As an initial compromise, I resorted to appending the following to the bottom of the httpd.conf file, and restarting Apache:

#WiTango Bits

LoadModule WitangoModule "C:\Program Files\WitangoServer\5.5\PlugIns\"


AddType application/witango-application-file .taf .tml .thtml .tcf

After creating an empty file at C:\Program Files\Apache Software Foundation\Apache2.2\htdocs\test.thtml, and navigating to http://localhost:8080/test.thtml in Chrome, I was greeted with this:

The lights are on, but no-one's home.

I resorted to altering the WitangoModule path to refer to the absolute path of the module’s DLL file ("C:\Program Files\WitangoServer\5.5\PlugIns\" – a single pair of double quotes is used within httpd.conf), and restarting Apache yet again – only to receive the same “Client Error” message.

According to the installation guide – which doesn’t even ship with the WiTango Server product, a Windows service (“Witango Server 5.5“) is supposed to be running (although it was never started in my case).

Unfortunately, the ReadMe never stated that, but I’m sure that it was obvious to the developers…

That aside, after starting the aforementioned service, I was greeted with this:

I guess that I can call it a success, given that I’m unfamiliar with the Tango/WiTango language, and haven’t got around to trying the associated developer tools yet…