Unsuccessful ClojureScript Quick-Start

CLOJURESCRIPT IS SOMETHING I’ve been following from the sidelines for quite a while now. Poking around a bit with React at work led my interest towards Om.Next. Contemplating creating my own web app, I wanted to give this lispy thing a go. The Om.Next documentation led me to the ClojureScript quick-start guide.


Tutorial: Create structure and file

  mkdir -p src/hello_world;touch src/hello_world/core.cljs


Me copy-pasta

  mkdir -p src/hello_world;touch src/hello_world/core.cljs



Tutorial: edit the src/hello_world/core.cljs to look like the following:

(ns hello-world.core)
(enable-console-print!)
(println "Hello world!")



Me copy-pasta

ns hello-world.core)
(enable-console-print!)
(println "Hello world!")



Tutorial: Add the following Clojure code to build.clj

(require 'cljs.build.api)
(cljs.build.api/build "src" {:output-to "out/main.js"})


Me copy-pasta

(require 'cljs.build.api)
(cljs.build.api/build "src" {:output-to "out/main.js"})



Tutorial: build it

java -cp cljs.jar:src clojure.main build.clj


Me copy-pasta

java -cp cljs.jar:src clojure.main build.clj



Tutorial: Create a file index.html

<html>
    <body>
        <script type="text/javascript" src="out/main.js"></script>
    </body>
</html>

Me copy-pasta

<html>
    <body>
        <script type="text/javascript" src="out/main.js"></script>
    </body>
</html>



Tutorial: open in browser and see error:

Uncaught ReferenceError: goog is not defined


Me

Check



Tutorial: modify index.html

<html>
    <body>
        <script type="text/javascript" src="out/goog/base.js"></script>
        <script type="text/javascript" src="out/main.js"></script>
        <script type="text/javascript">
            goog.require("hello_world.core");
            // Note the underscore "_"!
        </script>
    </body>
</html>

Me copy-pasta

<html>
    <body>
        <script type="text/javascript" src="out/goog/base.js"></script>
        <script type="text/javascript" src="out/main.js"></script>
        <script type="text/javascript">
            goog.require("hello_world.core");
            // Note the underscore "_"!
        </script>
    </body>
</html>



Tutorial: Refresh your index.html and you should finally see “Hello world!”

Me

base.js:677 goog.require could not find: hello_world.core
    goog.logToConsole_ @ base.js:677
    goog.require @ base.js:709
    (anonymous) @ index.html:6
base.js:711 Uncaught Error: goog.require could not find: hello_world.core
    at Object.goog.require (base.js:711)
    at index.html:6
goog.require @ base.js:711
(anonymous) @ index.html:6

Wut???
Being a first-time hello-world implementer surely doesn’t make one familiar with the error messages of a new compiler. So I was somewhat perplexed about what was wrong, what to do, how to debug and generally where to look.

I did a listing of the files, although I had no idea what to look for:

hello-world» find
.
./build.clj
./out
./out/main.js
./out/goog
./out/goog/math
./out/goog/math/integer.js
./out/goog/math/long.js
./out/goog/debug
./out/goog/debug/error.js
./out/goog/reflect
./out/goog/reflect/reflect.js
./out/goog/dom
./out/goog/dom/nodetype.js
./out/goog/deps.js
./out/goog/asserts
./out/goog/asserts/asserts.js
./out/goog/base.js
./out/goog/string
./out/goog/string/stringbuffer.js
./out/goog/string/string.js
./out/goog/array
./out/goog/array/array.js
./out/goog/object
./out/goog/object/object.js
./out/cljs
./out/cljs/core.cljs
./out/cljs/core.js.map
./out/cljs/core.js
./out/process
./out/process/env.js
./out/process/env.js.map
./out/process/env.cljs
./out/process/env.cljs.cache.json
./cljs.jar
./index.html
./src
./src/hello_world
./src/hello_world/core.cljs

Looks okay, I guess. If anything, the only file I could have messed up would be src/hello_world/core.cljs, but it seemed correct. I then began cat’ing files and found nothing obviously wrong. A bit frustrated, I followed the next steps in the tutorial, which refactored some of the build and dependency stuff. The same error however still appeared.
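
In hindsight, a quick grep through the build output might have narrowed it down sooner. Assuming the compiler only registers namespaces it actually managed to parse, a missing hello_world.core entry under out/ would have been a strong hint (the check below is my own suggestion, not part of the tutorial):

grep -r "hello_world.core" out/ || echo "namespace not found in build output"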

For reasons unbeknownst to myself, I then opened the file in VSCode – the color coding immediately revealed that something was wrong. The top line of the ClojureScript file was not colored like the others:

ns hello-world.core)
(enable-console-print!)
(println "Hello world!")

Great Scott! I had made a copy-pasta error and missed an opening paren :-O

Wrong
ns hello-world.core)

Right

(ns hello-world.core)

Here I had been cursing and swearing at the quick-start guide. I was definitely sure that I’d done it right, but alas, a syntax violation had been committed.

I do wonder though: why did the compiler not error out on this!?

A Search for BASH Scripting Alternatives


 

TOC
1. Introduction
2. Not Meeting the Criteria
3. Shells and Shell Languages
4. Scheme Languages
5. Scripting Languages
6. Finalists
7. Conclusion
8. Appendix

 
 
Reader’s note: this blog post did not convert entirely correctly to WordPress. Consider reading the original edition at: https://github.com/monzool/A-Search-for-BASH-Scripting-Alternatives
 
 

1. Introduction

 
Bash scripting is… well: http://mywiki.wooledge.org/BashWeaknesses

It is no secret that bash scripting has its pitfalls and oddities. Subtle mistakes are easy to make given its many quirks and arcane syntax rules. So this document seeks to investigate whether a language can be found which embodies only “Bash – the good parts”.

The idea for this document originated at my place of work. Our products are Linux-based, and in the development department all local workstations are Ubuntu installations, so bash scripting is a part of the heritage that automatically comes from using a Linux system. Most co-workers will use bash as a quick means to an end to automate some build steps or tweak testing facilities. A minor group uses bash for configuring and tweaking an embedded Linux platform. Bash scripts can be simple, but most often scripts grow in size and complexity over time and bugs sneak in. Other times a script is rushed, the implementer is fooled by the apparent simplicity of bash and falls into its many pitfalls, and bugs appear in even small scripts.

It is my experience that most bash scripts have bugs or are of low quality. Writing correct and safe bash scripts is hard. Having personally written a rather large umbrella build system in bash for one of our more advanced products, it became clear to me that an alternative to bash needs to be embraced.
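
To give a flavour of the kind of subtle mistakes meant here, consider a classic quoting pitfall (a constructed example, not taken from any of the scripts mentioned above):

file="monthly report.txt"
touch "$file"

# Unquoted expansion is word-split into two arguments, so the test errors out
# with "binary operator expected" instead of doing what was intended
if [ -f $file ]; then
    echo "found it"
fi

# Quoting the expansion keeps the filename as a single word
if [ -f "$file" ]; then
    echo "found it"
fi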

 

Criteria

  • The language, libraries and syntax should be sane, clean and unambiguous. No gotchas.
  • Multi arch. Must support Linux on i386, x64, armv7 and armv5te.
  • Terse. Not much cruft and boilerplate to get things done
  • Should be interpreted (intermediate compiling allowed) and require no cycle of compile + copy to target. It should be possible to modify a script on target and run it.
  • Dependencies. Scripts should be self contained, or have easily identifiable dependencies (libraries, modules).
  • REPL. Interactive shell is not required – but a bonus if available.
  • System calls. It should allow calling external tools (like make, grep etc.). This might be with a sane DSL or with a call mechanism that allows capturing output from bash-syntaxed executions. Direct calls to the operating system (or low-level libraries) are not a requirement.
  • Scoping. No dynamic scoping. This eliminates interesting candidates like picolisp and newLISP, but dynamic scope in bash is the cause of many confusing errors, so lexically scoped languages only (see the example right after this list).
  • Must be production ready, i.e. mature and preferably with more than one maintainer.
  • Must have a license that is permissive of commercial usage and embedding.
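
To illustrate the scoping criterion above: local variables in bash are dynamically scoped, i.e. they are visible to every function called further down the call chain, which regularly surprises people (constructed example):

#!/bin/bash

print_name() {
    # 'name' is not defined here, yet it resolves to the caller's local variable
    echo "name is: ${name}"
}

greet() {
    local name="alice"
    print_name
}

name="global"
greet         # prints: name is: alice
print_name    # prints: name is: global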

A secondary objective is to find an embeddable scripting language. If the language could work as both a native scripting language and an embeddable scripting language, it would be preferable. It must embed into a C/C++ ecosystem.

 

2. Not Meeting the Criteria

 

Perl

This contender actually has a lot of proven history, and surely would do the task at hand. But the Perl stack is rather large. There are smaller options like microperl[1], miniperl and tinyperl[2], but generally it seems that the recommendation is to look elsewhere than Perl if targeting a small embedded platform[3]. There is also the more subjective argument that Perl is a somewhat dated and “unsexy” language with a complex syntax.

What speaks in Perl’s favor is that it is inherently very suited for scripting, and has a long pedigree as a more versatile alternative to bash.

Various techniques known from bash are supported, e.g. HEREDOCs:

#!/usr/bin/perl
$a = 10;
$var = <<"EOF";
Multiline text string
in a HEREDOC section.
Value of a=$a
EOF
print "$var\n";

File handling is also very similar to the shell. To rename a file:

#!/usr/bin/perl
rename ("/usr/test/file1.txt", "/usr/test/file2.txt" );

However, Perl also has some of the annoying features of bash, e.g. when passing lists to subroutines (the list is flattened into the other arguments):

#!/usr/bin/perl
sub PrintList{
   my @list = @_;
   print "Given list is @list\n";
}
$a = 10;
@b = (1, 2, 3, 4);

PrintList($a, @b);

# => Given list is 10 1 2 3 4

Links:

  • [1] microperl
  • [2] tinyperl
  • [3] http://stackoverflow.com/questions/2461260/is-there-some-tiny-perl-that-i-can-use-in-embedded-system-where-the-size-would-m

 

python

Python is an easily learned language and is very versatile. Good shell scripting libraries and abstractions exist for Python[1,2,3,4]. The best might be xonsh[5], which provides a pythonic interactive shell, as well as easy access to system programs like grep and find.

Given its “batteries included” profile, Python is quite heavy in install size, and thus does not fit well on an embedded system. Only a few alternatives exist for reduced-size Python implementations: an old (unmaintained, batteries not included) distribution, tinypy[6], and eGenix PyRun[7], which packs a Python distribution down to about 11 MiB.

Links:

 

TCL

This scripting language is, like Perl, very versatile and has lots of libraries, but it also suffers from a large install size and a syntax that feels a bit outdated and unfriendly. TCL has two functions for calling system commands, open and exec, which provide for system calls[1]. It is not a seamless port from bash, as the commands have their own special syntax:

exec ls {*}[glob *.tcl]

First impressions of TCL’s pipeline handling are that it is rather cumbersome[2].

The TCL language is very flexible in its dynamic nature, which enables it to extend itself in a lisp-like way. Opinions on TCL are many and contradicting[3].

Jim[4] is a smaller edition of TCL, so the size impact of a TCL installation could be reduced by using that.
A few other languages target the TCL VM. One such is Little[5], which has a nicer abstraction on top of the exec call.

Links:

 

cash

Requires the node.js stack and is not suited for the targeted embedded platform.

Links:

 

Ammonite

Sits on top of the JVM and is thus not applicable for the targeted embedded platform. Also, the syntax seems clumsy and verbose.

Links:

 

Elixir

Elixir uses escript to build a binary with Elixir merged in. That binary then requires the Erlang VM to execute. I have not looked into the size of a minimal Erlang installation, as the language itself does not fit the intended purpose.

Links:

 

shcaml

shcaml is much reminiscent of scsh. It provides a wide range of user functions, system call wrappers and bash-like functionality:

Unix shells provide easy access to Unix functionality such as pipes, signals, file descriptor manipulation, and the file system. Shcaml hopes to excel at these same tasks.

With its _UsrBin_[1] module, often-used functions like ls, ps etc. are available.

The _Fitting_[2] module emulates common shell behaviors. E.g. redirecting to /dev/null can be done as:

run (command "echo hello" />/ [ stdout />* `Null ]);;

Links:

 

Julia

Julia has pretty good interop with shell commands built in[1]. But unfortunately it is not that lightweight an installation.

Links:

 

Wren

Wren is a very attractive language with great syntax and excellent C/C++ interop. It is written in C and is very portable. Wren can be used both standalone or embedded.

Unfortunately I could not find any evidence of it being able to do system calls. The system module[1] in its core library seems limited to mainly screen printing.

The documentation is also very incomplete. For example the chapter on how to use it embedded in C:

Calling a C function from Wren #
TODO

Calling a Wren method from C #
TODO

Storing a reference to a Wren object in C #
TODO

Storing C data in a Wren object #
TODO

Links:

 

Dart

Good support for stdout, stderr, stdin and the environment[1]. Process.run[2] provides convenient access to system tools:

Process.run('grep', ['-i', 'main', 'test.dart']).then((result) {
  stdout.write(result.stdout);
  stderr.write(result.stderr);
});

Dart also has a solution for handling pipes. Albeit not as seamless as bash, it seems to work well:

import 'dart:io';

main() async {
  var output = new File('output.txt').openWrite();
  Process process = await Process.start('ls', ['-l']);
  process.stdout.pipe(output);
  process.stderr.drain();
  print('exit code: ${await process.exitCode}');
}

Dart looks very appealing and has a wide range of good libraries as well. I could not find clear specs on how much space the VM can be shrunk down to, but as the downloadable Dart engine is 11 MB, it is not likely that it can be shrunk enough. The deal-breaker is however a passage stating that the targeted CPU only has experimental support:

has experimental support ARMv5TE [4]

Links:

 

duktape

JavaScript is among the hottest languages right now. Several embeddable JavaScript engines exist, but what appears to be the most used in embedded scenarios is the duktape engine.

This opens up an alternative strategy of using one of the many JavaScript transpilers to do the scripting in an entirely different language – although the level of indirection would probably be too high ;-)

Although the scripting facilities and embedding in C/C++ work well, there is no solid support for standalone system-near scripting.

License: MIT

Links:

 

3. Shells and Shell Languages

Shell languages are languages that either extend an existing shell (like bash), or implement their own shell and provide an alternative shell scripting language. Common to them is that this concept does not provide embedding into C/C++.

 

modernish

modernish is a library, written in POSIX-compatible shell script, that augments the shell scripting language with saner and better functions for most functionality.

Links:

 

powscript

powscript is a language that transpiles to bash. This makes it very portable – in theory. Unfortunately it targets bash-4 only

written/generated for bash >= 4.x, ‘zero’-dependency solution

A lot of features have been added to bash-4, and the busybox shell (ash) does not support many of the new features and syntax additions.

Apart from the portability issue for this particular purpose, it is a very appealing concept and it boasts some nice features as well[1].

Links:

 

NGS

NGS is a bash replacement. The project seems primarily focused towards AWS (Amazon) usage by providing interaction at the object level. There is also a focus on doing scripting in a custom GUI called small-poc[1].

Links:

  • NGS
  • NGS blog
  • [1] http://www.youtube.com/watch?v=T5Bpu4thVNo

 

oh

Oh is a bash replacement that describes its scripting language as:

a heavily modified dialect of the Scheme programming language, complete with first-class continuations and proper tail recursion.

The documentation[1] is at the better end of the “home brewed” shells, but is still pretty sparse.

Oh is written in Go and has automated tests for a wide range of architectures.
Compared to bash, the features are limited in some areas and improved in others[2].

I cross compiled oh as follows

env CC=/usr/local/arm-2007q3/bin/arm-none-linux-gnueabi-gcc CGO_ENABLED=1 GOARM=5 GOARCH=arm go build -v github.com/michaelmacinnis/oh

The compiled binary is about 2.8 MB. A bit much perhaps, but acceptable.
Unfortunately when executing oh on the destination target, it would endlessly fail with some futex error:

» strace oh
...
futex(0x2cab4c, FUTEX_WAIT, 0, NULL)    = -1 EAGAIN (Resource temporarily unavailable)

Links:

  • oh
  • [1] https://github.com/michaelmacinnis/oh/blob/master/doc/manual.md
  • [2] https://htmlpreview.github.io/?https://raw.githubusercontent.com/michaelmacinnis/oh/master/doc/comparison.html

 

elvish

elvish is a bash replacement with some very interesting ideas about scripting. It has, for example, some very nice concepts like named function arguments:

~> fn square [x]{
     * $x $x
   }
~> square 4
▶ 16

The project appears to be very young, and the implementation seems (for now) to be mainly focused on the interactive shell part.
Documentation is quite sparse. Some information can be found in the atom feed[1], some on the GitHub[2] site and other bits on the main webpage[3].

It is hard to get a real indication of the maturity of the project. Documentation is sparse and no test suite seems to exist, but the main committer appears to have been reasonably active over the last 4-5 years.

I cross compiled elvish as follows

env CC=/usr/local/arm-2007q3/bin/arm-none-linux-gnueabi-gcc CGO_ENABLED=1 GOARM=5 GOARCH=arm go build -v github.com/elves/elvish

The stripped binary weighs in at a hefty 4.6 MB.

The target having a read-only filesystem, I had to relocate $HOME to a writable directory:

arm-target» env HOME=/var/ elvish

Elvish starts just fine and seems functional. For the cross-compiled edition, the only exception is user input, which writes every input key at the same line column. This makes it pretty unusable as a shell replacement, but it should still be usable for scripting purposes.

Links:

  • elvish
  • [1] https://elvish.io/feed.atom
  • [2] https://elvish.io/learn/quick-intro.html
  • [3] https://github.com/elves/elvish

 

oilshell

An interesting project for a total bash replacement, with implementation decisions discussed in detailed blog posts by the author. The level of meticulous documentation, testing and benchmarking done by the author is quite impressive.
It is not production ready, and I am not sure whether being built with Python technology disqualifies it for small embedded platforms anyway.

The author of oil maintains an excellent list of shells and shell scripting languages[1].

Links:

  • oilshell
  • [1] https://github.com/oilshell/oil/wiki/ExternalResources

 

murex

murex is a rather young project (first github commit in April 2017).

Murex is a cross-platform shell like Bash but with greater emphasis on writing safe shell scripts and powerful one-liners while maintaining readability.

What murex really gets right is the terseness and expressiveness with few artifacts. It is very much the power of bash, just better and safer.

As with the other go-lang based shells, the readline functionality is not working in the cross-compiled edition.

 

4. Scheme Languages

A dedicated section is devoted to Schemes. This is because Scheme in general fits the criteria given for the bash replacement perfectly. But at the same time there are many, many variants of Scheme, all with their own pros and cons.

 

Chibi

chibi is the unofficial reference R7RS implementation of Scheme. It also has attractive features like an FFI generator and a good package manager (snow). It is small in size and handles both scripts and embedding into C/C++ well.

chibi also supports calling subprocesses:

> (import (chibi) (chibi process))
> (system "ls" "/usr/")
bin  games  include  lib  local  sbin  share  src

The documentation is quite sparse, or practically non-existent. This is from the process documentation page:

(execute cmd args)
(execute-returned cmd)
(system cmd . args)
(system? cmd)

This might (?) be enough for a seasoned schemer, but for the intended target of scheme n00bs, this is not good enough.

As an example, I still have to figure out what the difference between execute and system is. Not to mention figure out how to run the command:

> (execute "/usr/bin/whoami" '())
A NULL argv[0] was passed through an exec system call.
[1]    21444 abort (core dumped)  LD_LIBRARY_PATH=./lib ./bin/chibi-scheme

chibi is accompanied by very few examples and unit tests. Along with the lack of documentation, this does not give a particularly good first impression of the project. Nevertheless, it does seem to be a widely used and praised Scheme.

 

chez

chez was a commercial closed-source Scheme until Cisco open-sourced it. It is an R6RS Scheme with the optional capability of generating compiled output. The documentation is excellent[1].

./chez » ./bin/petite
Petite Chez Scheme Version 9.4
Copyright 1984-2016 Cisco Systems, Inc.

> (system "ls")
bin  lib  share
0

Installation size:

./chez » du -chs bin lib/csv9.4/i3le
316K    bin
2.1M    lib/csv9.4/i3le
2.4M    total

At first glance, cross-compiling seems a bit “home grown” and dependent on having target-specific description files[2]. It is unclear if the one existing ARM target could be used.

chez can run as an interpreter, but that eliminates some functionality like foreign calls and others[3].

Links:

  • [1] http://cisco.github.io/ChezScheme/csug9.4/csug.html, http://www.scheme.com/tspl4/
  • [2] https://github.com/cisco/ChezScheme/issues/7, https://github.com/cisco/ChezScheme/issues/13
  • [3] Building and Distributing Applications

 

gauche

I’ve looked into this version of Scheme before. It is very actively maintained and has a substantial amount of useful libraries. It boasts startup times matching those of bash, which makes it an interesting alternative – especially in cases where rapid script execution is key.

There is no scsh equivalent for gauche, but gauche does have a subprocess library[1] that makes process calls somewhat easier. An example of a subprocess call with pipes[2]:

(run-process-pipeline '((ls -l) (grep "\\.[ch]$") (wc)) :wait #t)

From comments in a gauche commit[3], it looks like an scsh-like interface is on the wish list:

+;; We might adopt scsh-like process forms eventually, but finding an
+;; optimal DSL takes time.  Meanwhile, this intermediate-level API
+;; would cover typical use case...
+(define (run-process-pipeline commands

Such a DSL would be a great asset, because there are gotchas from some Scheme vs. shell syntax collisions. One such is mentioned in the subprocess documentation:

Note that -i is read as an imaginary number, so be careful to pass -i as a command-line argument; you should use a string, or write |-i| to make it a symbol.

(run-process '(ls "-i"))

The above is an example of what scsh eliminates.

Looking at the Ubuntu packaging of gauche, its size bloats a bit

» du -shc /usr/bin/gosh /usr/bin/gauche-cesconv /usr/share/gauche-0.9 /usr/lib/gauche-0.9
24K     /usr/bin/gosh
4.0K    /usr/bin/gauche-cesconv
2.3M    /usr/share/gauche-0.9
5.3M    /usr/lib/gauche-0.9
7.5M    total

A total of 7.5 MB seems like quite a lot. It is unclear to me whether some of the share files and libraries can be excluded from a final target installation.

Besides… an often recurring issue with Gauche is that it won’t compile. This time around:

» ./configure --enable-multibyte=utf-8 --enable-tls=none --with-dbm=no --prefix=/usr

make[1]: Entering directory '/home/skv/public/src/Gauche-0.9.5/lib'
if test -f /usr/share/slib/require.scm && test i686-pc-linux-gnu = i686-pc-linux-gnu ; then \
  /usr/bin/gosh -ftest -uslib -E"require 'new-catalog" -Eexit;\
fi
gosh: "error": Compile Error: failed to link ../src/srfi-13.so dynamically: ../src/srfi-13.so: undefined symbol: Scm_MakeExtendedPair
"../lib/slib.scm":8:(define-module slib (use srfi-0) (us ...

Links:

 

5. Scripting Languages

 

squirrel

Squirrel is much reminiscent of Lua. It is tailored for embedding into C/C++ programs, but can be used as a standalone scripting language as well.

squirrel’s syntax is similar to C/C++/Java etc… but the language has a very dynamic nature like Python/Lua etc…

Much effort has been put into shrinking the installation size:

both compiler and virtual machine fit together in about 7k lines of C++ code and add only around 100kb-150kb the executable size

The system[1] library provides limited functionality like date, getenv and functions to delete and rename files. Otherwise a system function is the only access to system calls. The I/O[2] library provides basic file streaming operations.

 

mruby

mruby is a lightweight implementation of the Ruby language. It supports a multitude of execution models[1]. It can run as a script, be embedded in C/C++ or, if the source code is not to be distributed, it can be compiled to a bytecode format.

A lot of extension libraries[2] exist for mruby. In the context of this document, one of the more amusing is an extension that allows mruby to execute Lua[3].

Like Lua, mruby only provides the core of a script language. It appears that many expected features, like e.g. convenient file and directory handling, environment manipulation and errno, are provided by community libraries.

While mruby promises Ruby compatibility, a blog post from 2014[4] complains about the headaches of porting Ruby code to mruby.

An example from the blog:

$ ruby -e "p File.join ['a', 'b', 'c']"
"a/b/c

vs.

$ mruby -e "p File.join ['a', 'b', 'c']"
["a", "b", "c"]

The target installation process is a bit unconventional – but actually quite useful for a static embedded environment. During the build of mruby, all desired extensions and libraries are set in the build configuration, and the resulting build will include only what was enabled. An answer on the Arduino forum formulates it like this:

Main advantage of mruby over ruby is size, which can be crucial on embedded systems. The sole mruby executable weights 2.2Mb and it is completely self contained, including most of the standard library, plus commodities like RegExp, IO, Socket, File, and the Yun module. The mruby-gems, rather than being loaded and parsed runtime, are precompiled into byte code at build time, and directly statically linked into the executable. Once you have a cross-build system is rather easy to build a custom mruby interpreter with a custom set of gems. Furthermore, mruby-gems can easily mix methods implemented in C or mruby in the same class/module.

Further there is a comment on the C interface:

Finally, mruby C APIs are much more easy than C interfaces for ruby, and I’d say even marginally easier than python and lua interfaces. Which makes really easy to build C executables that embed an mruby interpreter, or to build C extensions to the standard mruby interpreter.

mruby has its own gem variant, mgem[5]. Currently it is a tool based on a manually maintained list of available mruby gems.

To install mgem:

gem install mgem

A main reason for even looking at mruby is the reputation of the scripster[6] library that runs on Ruby. It is an excellent abstraction that makes shell commands seem like first-class citizens in the Ruby language.
mruby does not inherently support require. This means a lot of scripts won’t run out of the box (including scripster). An mgem plugin, ‘mruby-require’, however exists to provide this functionality. To use it, the support must be compiled into mruby[7].

The build system of mruby is a NIH’ed, non-standard system. It is simple for a default standard build, but gets a bit confusing when it comes to cross-compiling. The library edition of mruby cross-compiles simply to a static library, but if you want it as a shared library you are out of luck[8]. Following the cross-compile guide does not output a cross-compiled instance of the mruby interpreter. Adding the binaries specifically for generation in build_config.rb takes care of that though:

conf.gem "#{root}/mrbgems/mruby-bin-mruby"
conf.gem "#{root}/mrbgems/mruby-bin-mirb"

See appendix A for my example of a cross-compile configuration

License: MIT

Links:

  • mruby
  • [1] http://mruby.org/docs/articles/executing-ruby-code-with-mruby.html
  • [2] mruby libraries
  • [3] https://github.com/dyama/mruby-lua
  • [4] http://gromnitsky.blogspot.dk/2014/09/porting-code-to-mruby.html
  • [5] https://github.com/bovi/mgem
  • [6] ruby-scripting
  • [7] https://github.com/mattn/mruby-require#install-by-mrbgems, http://stackoverflow.com/a/30794049
  • [8] https://github.com/mruby/mruby/issues/1666

 

scsh

scsh is written in Scheme48, which originates back in 1986 but is still actively updated. Scheme48 is written in PreScheme, which is a statically-typed dialect of Scheme.
By its own admission, scsh is not suitable as an interactive shell, as many features for this are not implemented yet. However, scsh is a very complete and well-thought-out abstraction over the requirements and needs of scripting and calling system commands.

Scsh spans a wide range of application, from “script” applications usually handled with perl or sh, to more standard systems applications usually written in C

scsh has very extensive system support; e.g. functions for networking, string manipulation, regex, file manipulation and many more exist. More specialized features such as creating fifos and file locks are also supported. scsh provides its own abstraction over system calls and has parted from traditional error handling, instead opting for an exception-based error mechanism.

System call error exceptions contain the Unix errno code reported by the system call. Unlike C, the errno value is a part of the exception packet, it is not accessed through a global variable.

It is clear that the creators of scsh have great knowledge of how the underlying OS works and operates, and have gone to great lengths to choose sane default behaviors. One example is the mentioned decision to consistently use exceptions, allowing free use of return values. Another example is the handling of EINTR, for which the solution you almost always want is the exact default chosen:

System calls never return error/intr – they automatically retry.

A nice feature of scsh is the option of compiling scripts to either byte-codes (a heap image) or native binaries:

Scsh programs can be pre-compiled to byte-codes and dumped as raw, binary heap images. Writing heap images strips out unused portions of the scsh runtime (such as the compiler, the debugger, and other complex subsystems), reducing memory demands and saving loading and compilation times. The heap image format allows for an initial #!/usr/local/lib/scsh/scshvm trigger on the first line of the image, making heap images directly executable as another kind of shell script.
Finally, scsh’s static linker system allows dumped heap images to be compiled to a raw Unix a.out(5) format, which can be linked into the text section of the vm binary. This produces a true Unix executable binary file.

All of the above sounds very promising. The prevailing comments from people using scsh in real life are, however, that it has some annoying problems when used as a scripting language[1]. Its module inclusion is not really tailored for being called as a shebang script. Error messages are missing their location of origin and debugging is quite cumbersome. Generally, debugging scsh scripts is thought of as a major PITA.

Links:

  • https://scsh.net
  • https://github.com/scheme/scsh
  • [1] http://www.lysium.de/blog/archives/215-Why-I-dont-use-scsh-as-a-scripting-language-anymore.html
  • Scripting with Scheme Shell by Rudolf Olah

 

luash

Lua is in itself a great scripting language. It can be used standalone but also has a vast pedigree of being embedded into C/C++ code where performance is a priority.

Lua by itself can call system programs using os.execute or, if output is needed, io.popen:

local ps = assert(io.popen("/bin/ps ax | grep '/sbin/init'", 'r'))
local val = ps:read("*a")
print(val)

The above does the job, but it is not that clean, and soon you’ll find yourself writing wrappers to make many successive system calls look nicer. This is one of the things that luash provides.

In the next example luash takes advantage of a syntactic feature in Lua where, if the argument of a function is a string or a table constructor, the parens can be omitted. This makes for a nice clean syntax when calls are simple:

-- $ ls /bin | grep $filter | wc -l
ls '/bin' : grep filter : wc '-l'

In other situations, the syntax is not that clean

local words = 'foo\nbar\nfoo\nbaz\n'

-- $(echo ${words} | sort | uniq)
local u = uniq(sort({__input = words}))
print(u)

Due to some conflicts between Lua and bash some commands need to be wrapped in a command call.

local chrome = sh.command('google-chrome')   -- because '-' is an operator

The sh.command functionality does however allow for some interesting composites.

local gittag = sh.command('git', 'tag')   -- gittag(...) is same as git('tag', ...)
gittag('-l')   -- list all git tags

A shortcoming of Lua is its weak/lacking standard library[1]. LuaRocks, LuaDist and others now provide downloading and managing of various libraries, but this does not translate very well to small embedded platforms and cross-compiling. A general problem with Lua is the fragmentation between LuaJIT, Lua 5.1, Lua 5.2 and Lua 5.3. Incompatibilities between those versions cause a lot of packages to only work on some of them. A prevailing problem in the Lua ecosystem is also dormant, unmaintained packages. The fragmentation and the relatively small package repository are considered a major problem within the community.

See appendix B for my example of a cross-compile configuration.
As with elvish, the REPL line reader in the cross-compiled build is completely borked and unusable. Not sure why this is…

The flexibility and few rules of Lua make it somewhat akin to JavaScript, and make it interesting as a transpiler target.

MoonScript[2] is an object-oriented language inspired by CoffeeScript.

It can be loaded directly from a Lua script without an intermediate compile step. It even knows how to tell you where errors occurred in the original file when they happen.

MoonScript development has somewhat stalled, but the author of the language is running his own company built on MoonScript technology[3].

Urn[4] brings the world of lisp to the Lua VM. It is a relatively new project, but seems to be progressing well.

The multi-transpiling language Haxe also has a backend that generates Lua code[5].

License: MIT
Lua license: MIT

Links:

  • luash
  • example.lua
  • http://notebook.kulchenko.com/programming/lua-good-different-bad-and-ugly-parts
  • [1] https://www.tutorialspoint.com/lua/lua_standard_libraries.htm
  • [2] MoonScript
  • [3] https://news.ycombinator.com/item?id=14441758
  • [4] Urn
  • [5] https://haxe.org/blog/hello-lua

 

6. Finalists

 

scsh

From a technical point of view, scsh is by far the solution that appears most complete and best suited. It is dedicated to shell scripting and does it well. Its library and functions cover a huge range of functionality. The obvious downside is that the Scheme language is not well known within the focus group, and will most likely discourage many people from using it. The PITA situation of debugging scripts is also a major drawback.
Using scsh would suggest the logical step of also using a Scheme-based embedded language for mixing with C/C++. Scheme48 (upon which scsh is based) or chibi would be the best choices.
The Scheme language is very interesting academically, but during the writing of this document it became clear that “googling” for actual Scheme snippets, help/hints and various library wrappers returns very sparse and low-quality results. Contrary to this are mruby and especially Lua, where snippets, hints and wrappers are abundantly available.

 

luash

luash is very attractive in being a Lua library. Lua is pretty easy to learn and use, and would be easier to adopt than e.g. the Scheme-based scsh.
Lua is praised for being a simple language with great flexibility. It is certainly possible to build large programs in Lua, but it is also clear from digesting the comments on the world wide web that the simplicity is also the Achilles heel of the language, forcing “do it yourself” solutions and patterns onto everything. It is small and efficient, easy to learn – but a tad cumbersome and restricted. Bringing package dependencies into the mix complicates the usage considerably.

 

mruby

The question of “why not Python” will be inevitable, as that is the main scripting language used within the focus group. Python is much used in test scripts, automation and build systems, but the fact is that Ruby has just as much to offer and provides the same level of flexible, exciting and productive scripting. The mruby community is really active and structured, and is making good progress in cloning the well-known Ruby tools into embedded mruby equivalents. Besides, no Python-derivative equivalent of mruby exists. Of the languages examined, mruby is superior to most in simplicity, expressiveness and capability.

 

7. Conclusion

TL;DR: Winner is: lua (and mruby)

A true shell-scripting-spirited project like elvish, oh or the like would have been preferred. From trying out various languages, it is clear to me that bash scripting is a special kind of domain, and most scripting languages do not translate especially well into that domain. Unfortunately the state of the tested alternatives is not of production quality.

A split decision for an alternative bash scripting language is the conclusion of this document. Of the available contenders, Lua and mruby would do equally well. For general-purpose simple scripting, Lua has a slight advantage given the luash library.

Regarding the secondary objective of finding an embeddable scripting language, both mruby and Lua would be excellent choices. The choice would mostly be about the extent to which the scripting language is to be used. If the embedded scripting is to be a larger part of the model or logic, use the better language, mruby. If embedding a scripting language is only for minor scripting facilities, go for the smaller and leaner language, Lua.

 

8. Appendix

 

Appendix A – Cross Compiling mruby

# Define cross build settings
if ENV['MRUBY_CROSS_TCA'] == "ffxav"
  MRuby::CrossBuild.new('ffxav') do |conf|

    toolchain :gcc

    tool_path = ENV['FFXAV_SDK_HOME']
    cgcc = "#{tool_path}/bin/arm-none-linux-gnueabi-gcc"
    car = "#{tool_path}/bin/arm-none-linux-gnueabi-ar"


    # C compiler settings
    conf.cc do |cc|
      cc.command = ENV['CC'] || cgcc
      puts("CC =&gt; #{cc.command}")
      cc.flags = ENV['CFLAGS'] || '-mtune=arm9tdmi -march=armv5te -O2 -std=gnu99 -pipe -fPIC -DPIC -DLINUX -DFFXAV'
      cc.include_paths = ["#{root}/include", 
                          "#{tool_path}/arm-none-linux-gnueabi/libc/usr/include",
                          "#{tool_path}/arm-none-linux-gnueabi/include/c++/4.2.1"]
    end

    # Linker settings
    conf.linker do |linker|
      linker.command = ENV['LD'] || cgcc
      linker.library_paths = ["#{tool_path}/arm-none-linux-gnueabi/libc/lib",
                              "#{tool_path}/arm-none-linux-gnueabi/lib"]
    end

    # Archiver settings
    conf.archiver do |archiver|
      archiver.command = ENV['AR'] || car
      archiver.archive_options = 'rs %{outfile} %{objs}'
    end

    # Enable compiling of binaries
    conf.gem "#{root}/mrbgems/mruby-bin-mruby"
    conf.gem "#{root}/mrbgems/mruby-bin-mirb"


    conf.gembox 'default'
  end
end

Adding the above section to build_config.rb will allow compiling with the command:

FFXAV_SDK_HOME=/usr/local/arm-2007q3 MRUBY_CROSS_TCA=ffxav minirake

 

Appendix B – Cross Compiling lua

export FFXAV_SDK_HOME=/usr/local/arm-2007q3

 

ncurses

TARGETMACH=arm-none-linux-gnuabi BUILDMACH=i686-pc-linux-gnu \
CC=${FFXAV_SDK_HOME}/bin/arm-none-linux-gnueabi-gcc \
CFLAGS='-mtune=arm9tdmi -march=armv5te' \
LD=${FFXAV_SDK_HOME}/bin/arm-none-linux-gnueabi-ld \
LDFLAGS='-mtune=arm9tdmi -march=armv5te' \
AS=${FFXAV_SDK_HOME}/bin/arm-none-linux-gnueabi-as \
CXX=${FFXAV_SDK_HOME}/bin/arm-none-linux-gnueabi-g++ \
./configure --prefix=/var/usr/local --without-ada --without-debug \
  --without-pkg-config --host=arm-none-linux-gnuabi

 

readline

CC=${FFXAV_SDK_HOME}/bin/arm-none-linux-gnueabi-gcc \
CFLAGS='-mtune=arm9tdmi -march=armv5te -I${WORK_PATH}/ncurses-6.0/install_dir/var/usr/local/include' \
LDFLAGS='-mtune=arm9tdmi -march=armv5te -L${WORK_PATH}/ncurses-6.0/install_dir/var/usr/local/lib' \
./configure --prefix=/var/usr/local --host=arm-none-linux-gnuabi

 

Lua

diff -Naur lua-5.3.4.orig/Makefile lua-5.3.4/Makefile
--- lua-5.3.4.orig/Makefile     2016-12-20 17:26:08.000000000 +0100
+++ lua-5.3.4/Makefile  2017-06-09 12:49:50.863809153 +0200
@@ -4,13 +4,13 @@
 # == CHANGE THE SETTINGS BELOW TO SUIT YOUR ENVIRONMENT =======================

 # Your platform. See PLATS for possible values.
-PLAT= none
+PLAT= linux

 # Where to install. The installation starts in the src and doc directories,
 # so take care if INSTALL_TOP is not an absolute path. See the local target.
 # You may want to make INSTALL_LMOD and INSTALL_CMOD consistent with
 # LUA_ROOT, LUA_LDIR, and LUA_CDIR in luaconf.h.
-INSTALL_TOP= /usr/local
+INSTALL_TOP= $(WORK_PATH)/lua/lua-5.3.4/var/usr/local
 INSTALL_BIN= $(INSTALL_TOP)/bin
 INSTALL_INC= $(INSTALL_TOP)/include
 INSTALL_LIB= $(INSTALL_TOP)/lib
diff -Naur lua-5.3.4.orig/src/Makefile lua-5.3.4/src/Makefile
--- lua-5.3.4.orig/src/Makefile 2015-05-27 13:10:11.000000000 +0200
+++ lua-5.3.4/src/Makefile      2017-06-09 13:34:27.000000000 +0200
@@ -4,22 +4,22 @@
 # == CHANGE THE SETTINGS BELOW TO SUIT YOUR ENVIRONMENT =======================

 # Your platform. See PLATS for possible values.
-PLAT= none
+PLAT= linux

-CC= gcc -std=gnu99
-CFLAGS= -O2 -Wall -Wextra -DLUA_COMPAT_5_2 $(SYSCFLAGS) $(MYCFLAGS)
-LDFLAGS= $(SYSLDFLAGS) $(MYLDFLAGS)
-LIBS= -lm $(SYSLIBS) $(MYLIBS)
+CC= $(FFXAV_SDK_HOME)/bin/arm-none-linux-gnueabi-gcc -std=gnu99 
+CFLAGS= -O2 -Wall -Wextra -DLUA_COMPAT_5_2 $(SYSCFLAGS) $(MYCFLAGS) -mtune=arm9tdmi -march=armv5te -I$(WORK_PATH)/readline-7.0/install_dir/var/usr/local/include -I$(WORK_PATH)/ncurses-6.0/install_dir/var/usr/local/include
+LDFLAGS= $(SYSLDFLAGS) $(MYLDFLAGS) -mtune=arm9tdmi -march=armv5te -L$(WORK_PATH)/readline-7.0/install_dir/var/usr/local/lib -L$(WORK_PATH)/ncurses-6.0/install_dir/var/usr/local/lib
+LIBS= -lm $(SYSLIBS) $(MYLIBS) -lncurses

-AR= ar rcu
-RANLIB= ranlib
+AR= $(FFXAV_SDK_HOME)/bin/arm-none-linux-gnueabi-ar rcu
+RANLIB= $(FFXAV_SDK_HOME)/bin/arm-none-linux-gnueabi-ranlib
 RM= rm -f

Scrambling BASH Script Contents

BASH SCRIPTS HAVE the inherent property of script languages of being readable by their users.
Sometimes it is desirable to at least attempt to hide the inner details of a script.

There are a few possibilities to do at least a bit of scrambling of the contents of a script for the viewing end user.

The examples shown here are not real content-hiding measures, but at first glance the end user will not have a clue about the content, and will have to do a bit of non-total-n00b work to get it descrambled.

Suppose we have a function that rolls the dice, and if you hit a three, the script exits.

function roll_dice() {
    local sample_space=6
    local number=${RANDOM}
    let "number %= ${sample_space}"
 
    if [[ $number == 3 ]]; then
        echo
        echo "Better luck next time! Game over - you loose :-P"
        echo
        exit ${number}
    fi
}
roll_dice

For this first example, base64 is used. It has the nice property of converting any input to a sequence that contains only plain ASCII characters.

If the above is saved in a script roll_dice.txt, the following command will base-64 encode it

cat roll_dice.txt | base64

The output of this command is the base64-encoded data:

ZnVuY3Rpb24gcm9sbF9kaWNlKCkgewogICAgbG9jYWwgc2FtcGxlX3NwYWNlPTYKICAgIGxvY2Fs
IG51bWJlcj0ke1JBTkRPTX0KICAgIGxldCAibnVtYmVyICU9ICR7c2FtcGxlX3NwYWNlfSIKCiAg
ICBpZiBbWyAkbnVtYmVyID09IDMgXV07IHRoZW4KICAgICAgICBlY2hvCiAgICAgICAgZWNobyAi
QmV0dGVyIGx1Y2sgbmV4dCB0aW1lISBHYW1lIG92ZXIgLSB5b3UgbG9vc2UgOi1QIgogICAgICAg
IGVjaG8KICAgICAgICBleGl0ICR7bnVtYmVyfQogICAgZmkKfQpyb2xsX2RpY2U=

The data can now be used in a script and evaluated. To descramble the data
back to its original bash content, the base64 command is used again, but in reverse.

This yields nothing but data though, so to have it evaluated (executed), the eval command is used to evaluate at runtime the code loaded into the dice variable.

#!/bin/bash
 
dice=$(base64 -d <<'EOF'
ZnVuY3Rpb24gcm9sbF9kaWNlKCkgewogICAgbG9jYWwgc2FtcGxlX3NwYWNlPTYKICAgIGxvY2Fs
IG51bWJlcj0ke1JBTkRPTX0KICAgIGxldCAibnVtYmVyICU9ICR7c2FtcGxlX3NwYWNlfSIKCiAg
ICBpZiBbWyAkbnVtYmVyID09IDMgXV07IHRoZW4KICAgICAgICBlY2hvCiAgICAgICAgZWNobyAi
QmV0dGVyIGx1Y2sgbmV4dCB0aW1lISBHYW1lIG92ZXIgLSB5b3UgbG9vc2UgOi1QIgogICAgICAg
IGVjaG8KICAgICAgICBleGl0ICR7bnVtYmVyfQogICAgZmkKfQpyb2xsX2RpY2U=
EOF
)
eval "${dice}"
 
echo "Welcome, you lucky one"

There are several reasons why this is more of a fun/prank trick than an actual security measure. But if you want, the same trick can be used with gpg. This will disallow users from running or decoding the scrambled script lines unless the passphrase is known.

First, create a gpg-encrypted script snippet. When run, gpg will prompt for the encryption passphrase.

» gpg -ac -o- <<'EOF' | xclip -selection clipboard
echo "I am encrypted"
EOF

As the key is required for executing the encrypted parts, the bash script needs the passphrase. If it is embedded, there is no way to hide the key, which eventually means that the end user will always have a means to decode the script before executing it. Alternatively, let gpg ask for the passphrase, and anyone knowing the correct passphrase will be able to run the hidden content.

In the script below the user will be prompted for the decoding passphrase before being able to execute the encoded section.

#!/bin/bash
 
echo "To learn the secret, you must know the passphrase"
 
secret_message=$(gpg -d <<'EOF'
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1
 
jA0EBwMCJ+zlMDPCUlBg0ksBUQm4AstVcHFIluhT8Og0RA83X5s6p54JWisJz/mk
xgdBFTcMXYto0fHgT2N4vC0BFog39IFDp6oMXRjtI1Quv1YQx4HTmVaRYeA=
=z7wJ
-----END PGP MESSAGE-----
EOF
)
eval "${secret_message}"

Electrolux UltraSilencer Review

THE ELECTROLUX ULTRASILENCER vacuum cleaner (model ZUSORIGDB+).

Silencer Head

The Machine

Silencer Head

It definitely is a very beautiful and streamlined machine, and it has a very high degree of quality feel to it.
The hose, pipe and handlebar are of very good quality, make no noise and are very comfortable to handle. Its power is extremely good and its power adjustment works surprisingly linearly.

The actual use of it does not measure up to its visual appearance, however. I think the mount point of the hose is too vertical and placed too far into the machine. The maneuverability of the vacuum machine is really bad. It is very hard to make it move in a straight line; it will instead tumble from side to side. Navigating through door openings is almost impossible without the vacuum machine straying off and banging into the doorframe. Very frustrating!

Silencer Head

Even more frustrating is its inability to roll over its own cord. The cord is actually a slim and very long cable which gives very good reach around the house. I think the front wheel is made too small. Even the tiny diameter of its own cord is enough to make it scoot the cable in front of it, and one has to lift the machine up to clear the cable.

Good

  • Beautiful
  • Very silent
  • Extremely powerful

Bad

  • Unstable movement
  • Bad with obstacles

The Silencer Head

The silencer head is indeed very silent. When vacuuming, the machine is almost inaudible. A very pleasing experience for your ears.
The suction from the head is extremely good. It will clean everything with ease, be it hard floors, furniture, car seats, carpets – anything!

Silencer Head

There are some serious drawbacks to the silencer head though. It is not very flexible and rotates at an angle on the axis of the pipe. In combination with the oval pipe, which makes it impossible to turn the handle on the pipe, it is almost impossible to twist it to reach under couches, low tables and other furniture. This brings me to another negative point: the head is unusually tall, and too high to fit under my couch at all.
The excellent suction of the head, in combination with my hardwood floor, makes it necessary to flip out the brushes on the head. I don’t really like this option, and never have. After a while the brushes clutter up and just push dirt around instead of letting the vacuum pick it up – especially when you have pets. Not a fault of this particular head, but of the old ineffective concept itself.

Good

  • Silent
  • Very good suction

Bad

  • Tall head
  • Inflexible movement

Conclusion

It definitely seems like a lot of previously learned experience has been lost and not transferred to new products. Or perhaps the person responsible for the product disregarded other considerations and focused too much on making it ultra silent before making it usable.
A good vacuum cleaner, but not up to expectations :-/.

Bonus

For my previous vacuum cleaner I bought this slimline head, also from Electrolux. This might be the very best head I’ve ever tried. It fits under the lowest furniture, is highly flexible in its movement, and sits on four wheels for unmatched effortless movement and easy maneuvering.

Slimline Head

I use this head instead of the original. It is lifted a bit above the ground and therefore lets the airflow disperse less optimally. This gives more suction noise and reduces suction power, but in combination with the really low noise level of the vacuum machine itself, and its abundance of power, this is a really, really good combination.

Garmin Forerunner 610 and Garmin Connect on Linux

GETTING PROPRIETARY ELECTRONICS to work on Linux can be a hassle sometimes. More often than not, companies develop controller software for Windows only, or at best for Windows and Mac OS, but neglect to support the Linux platform. Then those of us who enjoy the freedom and wonders of Linux are often out of luck, or have to reverse engineer a solution. Fortunately a couple of hackers did just that for the Garmin Forerunner 610.

With thanks to Tigge and Dave Lotton, it is possible to download files from the watch and upload them to Garmin Connect.

.

Tigge has created the tools to connect to the watch and download training session files from it. Download from GitHub and install:

» git clone https://github.com/Tigge/openant.git
» (cd openant; sudo python setup.py install)
» git clone https://github.com/Tigge/antfs-cli.git
» (cd antfs-cli; sudo python setup.py install)

Now insert the ANT+ USB dongle, and run this command to download all training sessions from the watch.

» antfs-cli

The files will end up in the directory ~/.config/antfs-cli/<id>/activities.
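
To verify what was fetched, the newest files can simply be listed (the id below is the one from my watch; yours will differ):

ls -lt ~/.config/antfs-cli/3894281250/activities | head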

.

To upload the files to the Garmin Connect service, install the GcpUploader made by Dave Lotton:

pip install gcpuploader

Next, set up a credentials file for GcpUploader.

echo -e "\
[Credentials]\n\
username=\n\
password=" > ~/.guploadrc

Edit the file and set the credentials. When setting the username, you must write your e-mail address. Otherwise you will get a login failure *1.

Finally upload all files:

~/.config/antfs-cli/3894281250/activities» gupload.py -t "running" *.fit
File: 2015-02-20_16-38-36_4_3.fit    ID: 707690585    Status: SUCCESS    Name: N/A    Type: running
File: 2015-02-24_17-46-28_4_4.fit    ID: 707690640    Status: SUCCESS    Name: N/A    Type: running
File: 2015-02-25_18-18-04_4_5.fit    ID: 707690660    Status: SUCCESS    Name: N/A    Type: running
File: 2015-02-27_17-26-12_4_6.fit    ID: 707688520    Status: EXISTS    Name: N/A    Type: N/A

As seen from the output, already uploaded files are skipped, so if you do not want to specify each file individually, the *.fit wildcard works perfectly fine. Note that gupload.py supports other tags than running. Run gupload.py --help for more information.

.

Side note:
For the version that I downloaded (GcpUploader-2015.2.21.3) I had to patch it to accept login with the credentials file:

--- gupload.py.orig     2015-02-28 14:03:14.223948320 +0100
+++ gupload.py  2015-02-28 16:24:35.738408614 +0100
@@ -92,7 +92,7 @@
       self.msgLogger.debug('Using credentials from command line.')
       self.username=myargs.l[0]
       self.password=myargs.l[1]
-    elif os.path.isfile(self.configCurrentDir):
+    elif os.path.isfile(configCurrentDir):
       self.msgLogger.debug('Using credentials from \'%s\'.' % configCurrentDir)
       config=ConfigParser.RawConfigParser()
       config.read(configCurrentDir)

If not wanting to venture into patching, gupload.py also accepts credentials as arguments (see gupload.py --help for more information).

.

Addendum: Dave Lotton recommends that one use the tapiriik service instead of GcpUploader…

F1 Timing App 2013

FOR THE 2013 Formula 1 season I thought I would treat myself to buying the official live timing app: ‘F1 Timing App 2013’ by Soft Pauer Limited. At season start it cost around €22 – a whole lot of money for a one-season app. So I had high expectations, but I quickly discovered that the app is utterly superfluous and added zero value to the Formula 1 watching experience.

The app promises the following features:
★ REAL-TIME TRACK POSITIONING ★
★ FOLLOW YOUR FAVOURITE F1 DRIVER ★
★ LIVE TIMING DATA ★
★ LIVE LEADERBOARDS ★
★ DOWNLOAD RACE PACKS ★
★ LIVE TEXT COMMENTARY ★
★ EVENT COUNTER & NOTIFICATIONS ★
★ KEEP UP TO DATE ★
★ COMPLETE FORMULA ONE ACCESS ★

The last one I don’t really know what means, but otherwise it sounds awesome. In reality only the Live Text Commentary has any grain of value (it displays some additional official insights into important events happening in the races).


The prime feature of the app is the realtime visual overview of car positions on the track.
Track Overview
It sure did sound great, but unless the race turns into a train of cars, the actual overview gets cluttered due to the overlapping markers. And when watching the race, it isn’t really an information abstraction that is needed.


The secondary high profile feature of the app is the live timing overview.
Live Timing
This would have added great value many years ago – before the TV transmission began showing equivalent information. You don’t need a costly app for what you can already see on the TV.



All in all, a costly app with appallingly scarce value. A lesson learned, which I will not repeat.

Learning How To Run

OMG, I DON’T know how to run properly :-O. That was my realization when I attended a 5 hour training course in barefoot running this weekend, hosted by Claus Rasmussen (Posemand.dk).

“What?!” you might say. “Running is a no-brainer, just get up and place one foot in front of the other at a fast pace. Everybody can do it.” Well, think again! Everybody can do it all right, but most have not learned the proper way of running. Running may seem simple, but there is a fine technique to it. Briefly described, the proper style involves a near-flat-footed landing and a tight, balanced body posture when a foot has ground contact.

To see what proper running should look like, turn your attention to the very best runners, like marathon runners or sprinters. One example is Haile Gebrselassie. When observed in slow motion it is quite clear that he has a perfect style. As video-documented by Claus, the proper running method is actually inherent “knowledge” in small children, i.e. a very natural way of running. But then, for almost everyone, it gets suppressed later on (via influence by imitation, footwear etc.).


Proper Running

Claus’ strategy (my interpretation) for achieving better running was very simple:

  • Learn the proper technique of running.
  • Use barefoot running as a tool for (re)learning how to run.
  • Continue to run barefoot to let the body express its natural flow of running to avoid future injuries.

After some background theory we were first filmed running in our normal footwear, for progress comparison later on. Then began the training for improved running. It was a basic exercise but hard to master (basically it boils down to bringing your heels straight up, relaxing and letting your body do the landing).

Before

Before

After

After

When looking at my before shot, it can be observed (1) that I tilt a bit forward. It is clear that I land with full bodyweight on my heels (3), and also that my landing stance (2) is very wide, giving imbalance and energy loss. In the after shot I have improved my vertical line, land flat-footed and have narrowed my stance.

My full progress as recorded by Claus can be seen here. It looks a bit funny/weird, but it feels right. And when the technique is learnt properly, one can begin adding speed so it begins to look more natural.

Barefoot Running

BAREFOOT RUNNING HAS become quite popular, and now I have joined in on this running trend.

Last summer I saw Martin Toft running in his fivefinger shoes. They looked weird :-D but I was also quite intrigued by the concept of down-to-basics running. Martin seems to be quite hooked on the barefoot running concept and I’ve followed some of his blogging.

The idea of barefoot running is to say nay to modern shoes and return to the natural, inherent principle of running barefoot (or with only very thin shoes). The theory is that this makes you a better and more natural runner, and reduces the risk of injuries significantly. I’ve been prone to getting lots and lots of running injuries, so the last “promise” was the selling point for me.

So the shoes (Vivobarefoot Evo II) have been bought, but first I have to unlearn old running habits adopted from modern cushioned running shoes. An exciting and well-run spring ahead, I hope :-)

Vivobarefoot Evo II

Folded a Thousand Paper Cranes

I FOLDED A thousand paper cranes. Japanese legend has it that the cranes will grant you a wish. The daughter of a couple of friends of mine was up for confirmation, and so a gift of long life and good health seemed like a good idea ;-)

1000 Cranes

A few of them were made of money and mixed into the lot ]:->

1000 Cranes

I gave the cranes as a gift in a big box

1000 Cranes

Serial Port Not Working On Fedora Due To GPS Daemon (gpsd)

THE SERIAL PORT on my Fedora 15 install mysteriously refused to be accessed. I discovered that when inserting a USB-to-serial device, GNU screen would refuse to access the created device /dev/ttyUSB0.

» screen /dev/ttyUSB0 115200
[screen is terminating]

Since I could use screen for serial access as root, and because the newly installed Fedora did have some hiccups in adding my user (the /home/monzool directory already existed from a previous Ubuntu install), I first checked group permissions, but they seemed fine for this situation.

» ll /dev/ttyUSB0
crw-rw----. 1 root dialout 188,  0 Jun  6 08:27 /dev/ttyUSB0
 
» groups
monzool tty wheel uucp dialout tcpdump screen vboxusers

Screen didn’t offer much indication of the problem, but using strace I could see that one of the last things checked for permissions was /var/run/screen. I then removed that directory and recreated the directory setup by starting screen with sudo.

» ll /var/run/screen
drwxrwxr-x. 4 root    root    80 Jun  6 08:27 screen
» rm -rf /var/run/screen
» sudo screen /dev/ttyUSB0
» ll /var/run/screen
drwxrwxr-x. 4 root    screen    80 Jun  6 08:47 screen

This helped nothing! :-(

I then tried minicom, which was more informative about the problem

» minicom
minicom: cannot open /dev/ttyUSB0: Device or resource busy

This would mean that something else had hijacked the port. A quick check confirmed that something called gpsd was using the port.

»  lsof /dev/ttyUSB0
COMMAND PID   USER   FD   TYPE DEVICE SIZE/OFF  NODE NAME
gpsd    883 nobody    8u   CHR  188,0      0t0 11408 /dev/ttyUSB0
»  ps ax | grep gpsd
  883 ?        S<s    0:00 gpsd -n -F /var/run/gpsd.sock

Now gpsd is for handling GPS devices, but it made no sense to trigger this daemon for a simple USB-to-Serial adapter.

Knowing what was causing the hassle, I found this bug report: https://bugzilla.redhat.com/show_bug.cgi?id=663124. In it, it is proposed to set USBAUTO=no in /etc/sysconfig/gpsd.

» echo "USBAUTO=no" >> /etc/sysconfig/gpsd

And sure enough, this fixed the problem. The USB-to-Serial adapter could now be accessed by any serial terminal.
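
If gpsd is not needed at all, a more radical option (not from the bug report, just my own assumption based on the stock Fedora service and package names) is to stop the service or remove the package entirely:

sudo systemctl stop gpsd.service
sudo yum remove gpsd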
