There have been numerous posts on Reddit, Stack Overflow and the like asking how to architect real-world Haskell applications. Well, this is my go at it for an in-house testing tool which is used extensively. I would not call myself an advanced Haskeller; some of the code is probably not idiomatic Haskell (and some parts are really, ah well, horrible), and it's (by its nature) very stateful (read: imperative), BUT… it works. It even worked out better than I thought initially…
How it came to this…
I work in the space domain and I am mostly concerned with mission control systems, especially the one from ESA (the European Space Agency) named SCOS-2000. What we needed was a tool to test certain new features implemented in the mission control system in a closed-loop test. The standard tools that come with SCOS are quite limited (and buggy) and written quite verbosely in C++. So why not try to create one in Haskell?
So it all started with a simple command-line client which could read telecommands from the MCS and simply send the correct acknowledge responses back. Since the protocols used are not common outside the space domain, they had to be implemented. That worked surprisingly well, so feature after feature was added, a graphical interface appeared and multiple use cases for the tool emerged.
Since we had nearly no time and budget available, it was important to get maximum output with a minimum of work. In short: I did rapid prototyping (no TDD, very limited unit tests) of something that should be quickly up and running, (quite) safe, versatile and correct (as far as possible with justifiable effort) against the specifications of the MCS. So it's more the opposite of writing clean code for a library. Still, there were some refactorings to bring in new concepts and libraries (I am very, very thankful for the async library; it removed some thread-termination problems in an instant), though of course a lot is still missing (e.g. it could benefit from the lens library, but last time I tried it had problems compiling with profiling information and I didn't have time to investigate further).
So I will also write a bit about the history of the implementation and some of the changes.
The Application
I called it the PUSSimulator (PUS = Packet Utilisation Standard from ESA). I didn't notice the (unintended) pun until my girlfriend told me: "that must be a quite sexy application…". Well, the name stayed…
Some screenshots (sorry for the bad quality):
Just from the screenshots you can imagine that it got quite complicated in the end (and is still being developed further). Still, it is very easy to use in comparison to other tools of this kind.
What it does…
Basically it is a server application that the MCS connects to. So it has to simulate parts of the NCTRS (Network Controller and Telemetry Router System), the ground station and some functionality of a spacecraft. There are 3 server connections: TC (telecommands, commands from the MCS to the satellite), TM (telemetry, data from the satellite to the MCS) and an admin connection, which is basically there to simulate messages from the ground station and the NCTRS.
The main tasks are:
- Generate arbitrary telemetry to test certain functionality of the MCS. There are currently a lot of ways to do this, including a dedicated DSL written in Haskell itself
- Replay recorded TM from a variety of formats (binary dumps, recording files, SCOS archive exports, some exotic ASCII dump formats extracted from MS Word files delivered by engineers, etc.)
- Parse telecommands and generate suitable responses, which can be tuned to simulate failures such as on-board failures, lost TM or long acknowledge delays for deep-space missions
- Perform protocol-specific tasks (CLTU (command link transmission unit) processing, segmentation of large packets, authentication of segments, internal handling of the COP-1 protocol machine, generation of idle packets if necessary, etc.)
- Decode certain telecommands and process them. The PUS standard defines certain services (e.g. TC acknowledges, management of the on-board TC queue, management of the on-board software for uploads and dumps, etc.) which have their own TC's and TM's assigned. The simulator can handle some of them; this is extended on demand
- Keep mission-specific adaptations as unintrusive as possible. This proved to be quite a hard point, because some satellites have a rather strange design (to put it kindly) and factoring out the mission-specific parts is sometimes really tough.
- Deliver TM and TC statistics. This was used for performance measurements of the MCS and is especially important for commanding. As an example, imagine that a low-earth-orbit satellite is typically visible to a ground station for 7–10 minutes and you have to upload a timeline of 1000 commands at an uplink rate of about 4 kbps (kilo-bit!), so you can't afford many delays (then also imagine the number of contacts needed for a software upload, where each contact is not exactly cheap).
The commanding really has a lot of checks and handshakes in between, so it is notoriously slow. The simulator was used to get figures on the raw speed capabilities of the MCS and on whether it can fulfill the requirements of higher-bandwidth missions.
The Architecture
The current architecture (it may also change in the future) arose from the need for some kind of model-view-controller paradigm. We have the following parts that should play together:
- A user interface (not necessarily graphical) for view and control
- A server infrastructure for the 3 connections it should serve
- Some internal model of spacecraft functionalities (requires state)
- Parsers/Decoders/Serializers/De-Serializers for the data (TM, TC, configuration and other input data).
- Protocol handling (some of it is also stateful, e.g. de-segmentation of large TC's)
From the server-like requirements it is clear that we need multiple threads, which has quite an impact on the design. How do the threads communicate with each other? First we had MVars and Chans, then I decided to go with STM (which eased things a lot because of its composability) and message passing via TChan's. The shared state is put into TVars (more on this later). The use of threads also determined the graphical user interface, since at the time the implementation was started the GTK binding was the only one supporting multi-threaded applications (thanks to Daniel Wagner for his tutorial on using GTK with threads). So let's have a look at the basics first; the small sketch below shows the core pattern in miniature before we get to the real parts.
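To make that pattern concrete, here is a minimal, self-contained sketch (my own illustration, not code from the simulator) of two threads talking over a TChan while sharing a counter in a TVar, supervised with the async library:

```haskell
import Control.Concurrent.Async (concurrently_)
import Control.Concurrent.STM
import Control.Monad (replicateM_)

data Msg = Ping | Stop

main :: IO ()
main = do
    chan    <- newTChanIO
    counter <- newTVarIO (0 :: Int)
    let producer = do
            -- send a few messages, then ask the consumer to stop
            replicateM_ 5 (atomically (writeTChan chan Ping))
            atomically (writeTChan chan Stop)
        consumer = do
            msg <- atomically (readTChan chan)
            case msg of
              Ping -> atomically (modifyTVar' counter (+ 1)) >> consumer
              Stop -> readTVarIO counter >>= print   -- prints 5
    concurrently_ producer consumer
```

The simulator does the same thing, just with many more channels, richer message types and the shared state living in a single TVar (shown in the State section below).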
Interface
The interface should be decoupled from the actual model. Also, in the beginning this was only a command-line application; the GTK part came later. This made it necessary to have some kind of versatile abstraction mechanism, so that the interface could be swapped out or even several interfaces could run in parallel (imagine for example a GTK interface and a socket interface in parallel, so that the simulator could be remote-controlled for automated tests. Or, since some of the automated tests for the MCS are in Python, a Python interface could also be nice. Hm, food for thought…)
Anyway, this can come in handy; currently we only have the command-line interface (which is now bit-rotted) and the GTK one. So to decouple the model from the view/controller we have a simple data structure:
```haskell
data Interface = Interface {
    ifActionTable :: ActionTable,
    ifRaiseEvent  :: Event -> IO ()
    }
```
The two fields represent the two directions:
- the ActionTable is for the direction user interface → model, so the controller part. All interfaces just use the ActionTable for calling functionality in the simulator itself. It looks like this (only an excerpt; the real one is much bigger):
```haskell
data ActionTable = ActionTable {
    actionEnablePIDs             :: Interface -> [Word8] -> Bool -> IO (),
    actionLoadImageFile          :: Int -> String -> Interface -> IO (Either String ()),
    actionDumpMemory             :: Int -> IO (),
    actionStartTMSimulation      :: Interface -> [Action] -> IO (Async ()),
    actionToggleTCRandomizing    :: Bool -> IO (),
    actionSetTMStreamType        :: NcduTmStreamType -> IO (),
    actionToggleTCAuthentication :: Bool -> IO (),
    actionEnableTM_1_1           :: Bool -> IO (),
    ...
    }
```
- ifRaiseEvent is a function for the other direction, model → view. It is called from the simulation part if the interface should be notified of something and has to be provided by the interface. It takes an Event which may look like this (just a sample):
```haskell
data Event =
    EV_TM_Socket_Accepting
  | EV_TM_Socket_Connected String
  | TM_Interpreter_Error String
  | TM_Interpreter_OK
  | EV_TC_Received
  | EV_TC_Frame_Decoded
  | EV_TC_Segment_Decoded
  | EV_TC_Packet_Internal_Processing_Started
  | EV_TC_Decode_Error String
  | EV_TC_Display_FARM_State FARMState
  | EV_TC_Display_CLCW CLCW
  | EV_TC_AUTH_Failed String ByteString
  | EV_TC_AUTH_Succeeded ByteString
  | EV_TC_AUTH_Error String
  ...
  deriving Show
```
Since events are added quite frequently, it is nice that GHC can check (via its pattern-match exhaustiveness warnings) whether all of them are handled, so none is forgotten. The interface now must provide a function that does something with these events.
There is also a smart constructor and a helper for calling the ifRaiseEvent function:
```haskell
createInterface :: ActionTable -> (Event -> IO ()) -> Interface
createInterface table evR = Interface table evR

callInterface :: Interface -> (ActionTable -> t) -> t
callInterface interface f = f (ifActionTable interface)
```
The helper is just there to save some typing, so e.g. the GTK interface can do something like:
```haskell
module Interface.Gtk.TMSettings
...

setupCallbacks :: TMSettingsGUI -> Interface -> IO ()
setupCallbacks g interface = do
    -- get the toggle button
    let sendTM1 = tmsSendTM1 g
    -- register the callback
    on sendTM1 toggled $ do
        val <- toggleButtonGetActive sendTM1
        callInterface interface actionEnableTM_1_1 val
```
And events can be raised (unsurprisingly) like this:
```haskell
ifRaiseEvent interface (EV_TC_Packet_Received pkt p)
```
An important restriction: because of the message-passing approach, actions on the model which are forwarded to a different thread via STM TChan's can't have a return value; instead, the notification is done asynchronously via the ifRaiseEvent function, e.g. like this:
```haskell
module Interface.GtkInterface
...

raiseEvent :: GUI -> Event -> IO ()
raiseEvent g (EV_TM_Socket_Connected hostname) = do
    logStr lAreaAlways $ "[IF]: TM Socket connected to: " ++ hostname
    setEntryConnState (guiEntryTmConnected g) CONNECTED

-- many more pattern matches to follow
...
```
And now, to bring it all together, the main function of the application looks like this:
```haskell
main :: IO ()
main = withSocketsDo $ do
    np <- getNumProcessors
    setNumCapabilities np

    -- parse the configuration and command line options
    (config, _) <- (getArgs >>= parseOpts)
    ...

    -- create the channels
    channels <- initChannels

    let actionTable = ActionTable {
            actionEnablePIDs        = enablePIDs,
            actionStartTMSimulation = startTMSimulation (cfgPUS config) (chanPkt channels),
            actionEnableTM_1_1      = enableTM1_1 (chanSim channels),
            ...
            }

    -- initialise the GTK interface. Passes in the actionTable from the simulator,
    -- the configuration and the created channels and returns the created interface
    interface <- initGtkInterface (cfgPUS config) actionTable channels

    -- setup the threads (more later)...
    ...

    -- start the GTK main loop
    gtkInterfaceMainLoop interface

    putStrLn "Exiting."
```
The initGtkInterface function initialises GTK, loads the GUI data in Glade GtkBuilder format, initialises its components, calls the (for this section important) createInterface function with the provided action table and its internal raiseEvent function, and returns the interface for the rest of the application.
There is also an initCmdLineInterface function with an accompanying cmdLineMainLoop function, but it is currently not in use and commented out.
Still, with this setup it is quite easy to switch to a new interface. Currently there is no possibility to run multiple interfaces in parallel, but it would be fairly easy to do and will be an extension in the future.
Also, interfaces could be factored into plugins and dynamically loaded at runtime if needed, so there is a lot of room for improvement.
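For instance, running a GTK interface and a socket interface side by side could build directly on the createInterface constructor shown above. A hedged sketch (combineInterfaces is a hypothetical name, not existing code):

```haskell
-- Hypothetical sketch: one combined interface that fans every event out to
-- several event handlers; all interfaces share the same ActionTable.
combineInterfaces :: ActionTable -> [Event -> IO ()] -> Interface
combineInterfaces table handlers =
    createInterface table (\ev -> mapM_ ($ ev) handlers)
```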
That’s it for the interface decoupling, nothing really fancy here. I saw a post from Gabriel Gonzalez about MVC, but didn’t have time to look at it.
State
When thinking of Haskell and state, one of the first things that comes to mind is of course the State monad. It can be used for stateful computations within a thread, but it is not suited for state shared between threads (each thread would effectively run on its own copy of the state, and the states would diverge very quickly). So the central application state, which must be shared between threads, is kept in STM's TVars.
Also, most functions in the simulator need the configuration values. The configuration itself is a plain Haskell data structure with automatically derived Read and Show instances. Because it would have taken more time, I did not implement a config file in XML or INI format, but just use the Read and Show instances of the datatype, so that loading is just reading the file and applying the read function to its content. A lot of other data structures inside the simulator also have Read and Show instances, which are used to view them or to store/reload them to/from files.
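As an illustration of that approach, here is a hedged sketch of what loading and saving could look like (the function names are mine, not the simulator's); readMaybe just gives a slightly friendlier failure mode than a plain read:

```haskell
import Text.Read (readMaybe)

-- Persist the configuration purely via its derived Show/Read instances,
-- as described in the text (sketch only; names are illustrative).
saveConfig :: FilePath -> PUSConfig -> IO ()
saveConfig path cfg = writeFile path (show cfg)

loadConfig :: FilePath -> IO (Either String PUSConfig)
loadConfig path = do
    content <- readFile path
    case readMaybe content of
      Just cfg -> return (Right cfg)
      Nothing  -> return (Left ("could not parse config file: " ++ path))
```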
But back to the configuration: since it is needed nearly everywhere, a Reader monad comes in handy, which basically carries some sort of "environment" with it that can be "asked" for. So I ended up defining the central monad of the simulator as:
```haskell
type Simulation a = ReaderT SimulationState IO a
```
which is a Reader monad transformer with the SimulationState stacked over the IO monad (as we need to read/write sockets and files). The SimulationState itself is:
```haskell
data SimulationState = SimulationState {
    _config       :: PUSConfig,
    _simStateTVar :: TVar GlobalState
    }

data GlobalState = GlobalState {
    _tcState :: TCProcessorState,
    _tmState :: TMProcessorState
    }
```
So with the ReaderT we have read-only access to the components, which are the configuration and a TVar which holds the shared state (GlobalState). The shared state itself consists of two data structures, one for the TC processor and one for the TM processor. These are themselves bigger data structures with lots of fields (an excerpt):
```haskell
data TMProcessorState = TMProcessorState {
    tmpSendIdlePkts :: !Bool,
    tmpMCFC         :: !Word8,
    tmpVCFC         :: !Word8,
    tmpPusSSC       :: SSCCounter,
    tmpLastPktStat  :: !(TimeVal, Word32),
    ...
    }

data TCProcessorState = TCPState {
    tcpFARMState              :: TVar FARMState,
    tcpSegmentState           :: TVar SegState,
    tcpTCRandomizerEnable     :: !Bool,
    tcpTCAuthenticationEnable :: !Bool,
    tcpLAC                    :: !LacCounters,
    tcpFixedKey               :: !Key,
    tcpProgrammableKey        :: !Key,
    tcpKeyUsed                :: !AuthKeyType,
    ...
    }
```
These are the different fields needed, e.g. a Bool which tells whether the simulator should send idle packets or not, the Master Channel Frame Count and Virtual Channel Frame Count as required by the PUS on the transfer frame level, the Source Sequence Counter on the packet level (internally a HashMap from a Word16, the application ID of the packet, to a counter value) and so on.
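As an illustration (a sketch of mine, not the simulator's code), the per-APID sequence counter could look roughly like this, assuming the counter value is also a Word16 and wraps at the usual 14-bit packet sequence count boundary:

```haskell
import qualified Data.HashMap.Strict as HM
import           Data.Word (Word16)

-- Sketch: per-APID source sequence counter as a HashMap, as described above;
-- the Word16 value type and the 14-bit wrap-around are assumptions.
type SSCCounter = HM.HashMap Word16 Word16

-- return the SSC for the next packet of this APID together with the updated map
nextSSC :: Word16 -> SSCCounter -> (Word16, SSCCounter)
nextSSC apid m =
    let cur = HM.lookupDefault 0 apid m
    in  (cur, HM.insert apid ((cur + 1) `mod` 16384) m)
```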
For the TC state there are further structures like the FARMState (an internal state needed for the FARM-1 – the receiver – part of the COP-1 protocol on the transfer frame level) and the SegState, which is needed for packing a segmented packet back together (one level above the transfer frame level in the protocol), and also things like the keys needed for authentication etc.
As we have a lot of nested state, I think lenses could have helped me, but for some reason I had problems with the library back then and haven't had time to look at it since, so I wrote most accessor functions by hand.
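To give an idea of what such hand-written accessors look like, here is a hedged sketch in the Simulation monad (getTMState shows up in a later listing; modifyTMState and the exact bodies are my assumption):

```haskell
import Control.Concurrent.STM
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Reader (asks)

-- read the TM processor state out of the shared TVar
getTMState :: Simulation TMProcessorState
getTMState = do
    tvar  <- asks _simStateTVar
    state <- liftIO $ readTVarIO tvar
    return (_tmState state)

-- apply an update function to the TM processor state atomically
modifyTMState :: (TMProcessorState -> TMProcessorState) -> Simulation ()
modifyTMState f = do
    tvar <- asks _simStateTVar
    liftIO $ atomically $ modifyTVar' tvar (\g -> g { _tmState = f (_tmState g) })
```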
Threads
So what about the thread structure? From the requirements of what it should do, several threads are already necessary:
- A GUI thread. Since we use GTK, which requires its main loop to run in the main thread, this thread is already fixed
- A TC thread which reads incoming commands. Since this is a server application, we need a listener thread which, on a successful accept of a TC connection from the MCS, spawns the reader thread. This is fairly normal server socket stuff. The reader needs to process the different protocol layers until it can make sense of the command and then forward it to further processing components. As we also simulate the parts of the ground station and network routing which send back the first acknowledges, it has to do this as well
- A TM thread which is started on a successful accept of a TM connection from the MCS and simply listens on a TChan for packets it should encode and send to the MCS.
- An admin thread which is started on a successful accept of an admin connection from the MCS and simply listens on a TChan for admin messages it should encode and send
- The SimState thread. Its main purpose is to update the shared state of the simulator (changing settings, coordinating some tasks), and it provides the main access point for the interface.
- A TM simulation thread which is started on demand and simply puts TM packets into the channel for the TM sending thread. This is used for TM replays and for TM generated via the DSL.
- Some components also need internal threads. E.g. the on-board queue component has a timer to be able to execute received commands at a certain timestamp.
Looks good? Well, in theory this is enough; in practice we need more. Most of the following issues appeared during the implementation.
GUI Thread
This is the most straightforward thread. GTK just requires that the main loop runs in the main thread, which is simple. GTK callbacks also run in this thread, but calls from the other threads via the ifRaiseEvent function shown above of course run in a different thread, so all GTK calls there need to be wrapped in postGUIAsync or postGUISync calls. It's just something to be aware of, nothing complicated.
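A minimal sketch of what that wrapping can look like (guiRaiseEvent is a hypothetical name; GUI and raiseEvent are from the listing in the Interface section):

```haskell
import Graphics.UI.Gtk (postGUIAsync)

-- wrap the actual GTK work in postGUIAsync so the handler given to
-- createInterface can safely be called from any simulator thread
guiRaiseEvent :: GUI -> Event -> IO ()
guiRaiseEvent gui ev = postGUIAsync (raiseEvent gui ev)
```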
TC Thread
The basic (pseudo-code) version of the TC thread was something like this:
```haskell
processTCCore :: Socket -> Interface -> Simulation ()
processTCCore sock interface = do
    -- read the command from the socket
    tc <- receiveTC sock

    -- send the first responses that we have received the command
    sendNCTRSResponses sock tc

    -- start the protocol handling
    process tc
    ...

    -- and loop over again
    processTCCore sock interface
```
While this looks simple and nice, it is quite impractical. For example, we have no possibility of terminating the thread from the outside other than killing it. The thread will be blocked in the call to receiveTC most of the time and doesn't react to anything else. If we kill the thread, this would possibly leave some resources stale (not shown in the pseudo-code above) until they are garbage collected, and it is also not exactly an orderly shutdown of the machinery.
Well, in C we would probably use the select() system call for this with an additional pipe as input; in C++ maybe some Reactor pattern (like in the POCO or ACE libraries). We don't have this in Haskell (well, we could probably use the IO manager from GHC), but the Haskell solution seems to be to use more threads, because they are very cheap.
The idea behind this is to have a Socket Reader thread which blocks in receiveTC and forwards the read command via a channel to the processing thread. Through this channel a termination request can also be sent, so that the processing thread can clean up, terminate the Socket Reader and exit. And while we are at it, it may also be helpful if writing responses back to the socket can be triggered from outside the TC thread. There should also be the possibility to delay responses but still keep them in the right order, so we add another Socket Writer thread which does this job. The final solution looks somewhat like this:
```haskell
-- send a response to the MCS from outside the TC processing thread
writeTCResp :: TCWriteChannel -> TCResponse -> Simulation ()
writeTCResp chan resp = liftIO $ atomically $ writeTChan chan resp

-- we come here after the successful accept from the server socket
startTCProcessing :: Socket -> Channels -> Simulation ()
startTCProcessing handle channels = do
    -- some preliminary work
    race_ (readSocket handle (chanTC channels) (chanWriteTC channels))
          (writeSocket handle (chanWriteTC channels))
    liftIO $ sClose handle
    return ()

-- This thread does the reading on the socket
readSocket :: Socket -> TCChannel -> TCWriteChannel -> Simulation ()
readSocket handle tcChan tcWriteChan = do
    action handle tcChan
    where
      action handle tcChan = do
          res <- liftIO $ receiveLoop handle (handler tcChan)
          return ()
          where
            handler tcChan pkt = liftIO $ atomically $ writeTChan tcChan $ TCDU pkt

-- This thread does the write on the socket
writeSocket :: Socket -> TCWriteChannel -> Simulation ()
writeSocket handle tcChan = do
    action handle tcChan
    where
      action :: Socket -> TCWriteChannel -> Simulation ()
      action handle tcChan = do
          resp <- liftIO $ atomically $ readTChan tcChan
          case resp of
            TCNCTRSResp r -> do
                liftIO $ sendTC handle r
                action handle tcChan
            TCDelayedNCTRSResp r delay -> do
                async $ liftIO $ do
                    let tm = (fromIntegral (timeToMicro delay))
                    threadDelay tm
                    sendTC handle r
                action handle tcChan
            TerminateWrite -> do
                return ()

processTCCore :: Channels -> Interface -> Simulation ()
processTCCore channels interface = do
    -- some initialisation and preparation
    loop cfg defaultCLCW True perfHandle
    where
      loop :: PUSConfig -> CLCW -> Bool -> PerfLog -> Simulation ()
      loop _ _ False perfHandle = do
          writeTCResp tcWriteChan TerminateWrite
          liftIO $ closePerfLog perfHandle
          return ()
      loop cfg oldClcw True perfHandle = do
          res <- liftIO $ atomically $ readTChan tcChan
          (clcw, continue) <- case res of
              TCDU pkt -> do
                  -- notify the interface that we got a TC
                  liftIO $ ifRaiseEvent interface (EV_TC_Received)
                  -- send now the NCTRS responses for 1st and 2nd level UV
                  _ <- sendNctrsResponses tcWriteChan pkt oldClcw
                  -- start the decoding and protocol handling
                  ...
              Terminate -> do
                  liftIO $ do
                      logStr lAreaAlways "Received Terminate request, terminating TC handler..."
                      atomically $ writeTChan tcWriteChan TerminateWrite
                  return (oldClcw, False)
              ResetStats -> do
                  resetTCStatistics interface
                  return (oldClcw, True)
          loop cfg clcw continue perfHandle
```
I am sure there are better ways to write this, but at least it works. startTCProcessing starts the Socket Reader and Writer threads with the excellent async library from Simon Marlow (or rather lifted-async, since we are in a monad transformer). The race function is really nice for this: it starts the two threads and if one terminates, it automatically shuts down the other. So the termination sequence is:
- The core processing thread gets a Terminate request via the channel
- It sends this request to the Socket Writer thread
- The Socket Writer thread terminates
- The race function ensures that the Socket Reader also terminates
So we can now send further messages to the TC processing from the outside; in the example above it can also receive a ResetStats message, which resets the statistics about received TC's.
The receiveLoop is a rewrite of the receiveTC function with conduits, which was done recently.
TM and Admin Thread
Remember that the basic implementation of the TM thread would be something like this:
```haskell
processTMSocket :: Socket -> Channels -> Interface -> Simulation ()
processTMSocket handle channels interface = do
    -- listen on the TM channel for messages (packets to send, termination requests, ...)
    msg <- liftIO $ readTMPacketChannel pktChan

    -- if it is a packet, encode it and send it to the socket
    -- if we should terminate, simply don't enter the loop again
    -- otherwise loop over
    processTMSocket handle channels interface
```
So we already have the possibility of termination via a channel message. But the TM thread has another problem: in the upper right of the simulator there are connection indicators which show that the MCS is connected. It is important to know as soon as possible when the MCS disconnects for some reason. With the current implementation this is not always possible. Since the TM (and admin) threads spend most of their time in the blocking wait on the channel for a message, if the MCS disconnects in the meantime, the thread can only detect this on the next write to the socket, which would return an error. That could potentially be hours later, or even never. So how do we get a reliable connection indicator?
Fortunately, this is quite easy in this case. The TM and admin threads never read from the socket, they just write (and the MCS never sends anything on this socket). So the idea is to start a simple checker thread which just hangs in a blocking read on the socket. If the read returns the disconnection condition, it can terminate the TM thread. We again utilise the race function from async, in this way:
```haskell
processTMSocket :: Socket -> Channels -> Interface -> Simulation ()
processTMSocket handle channels interface = do
    -- some initialisation and preliminary work

    -- start the watchdog thread for the socket
    liftIO $ logStr tmArea "Starting processing loop..."
    race_ (checkerThread handle interface)
          (loop handle perfHandle True)

    -- if we come here, we should terminate so perform some cleanup
    liftIO $ do
        sClose handle
        closeTMPerfLog perfHandle
    liftIO $ logStr tmArea "TM processing loop terminated."
    where
      loop :: Socket -> TMPerfLog -> Bool -> Simulation ()
      loop _ _ False = do
          liftIO $ do
              -- terminate the simulation thread and the processing
              ifRaiseEvent interface TM_Processing_Terminated
              logStr lAreaAlways "TM Processing terminated"
          return ()
      loop handle perfHandle True = do
          msg <- liftIO $ readTMPacketChannel pktChan
          case msg of
            Nothing -> do
                -- we have a timeout, so we send an idle frame
                state <- getTMState
                when (tmpSendIdlePkts state) $
                    encodeIdlePacketAndSend handle clcwChan interface perfHandle
                loop handle perfHandle True
            Just msg -> do
                case msg of
                  TMPkt pkt -> do
                      encodePacketAndSend handle clcwChan interface perfHandle pkt
                      loop handle perfHandle True
                  TMResetStatistics -> do
                      restartStats interface
                      loop handle perfHandle True
                  TMTerminate -> do
                      liftIO $ logStr lAreaAlways $ "TM: got termination request"
                      loop handle perfHandle False

      checkerThread :: Socket -> Interface -> Simulation ()
      checkerThread sock interface = do
          cont <- liftIO $ do
              logStr tmArea "TM Connection checker thread started..."
              recv sock 1
          -- on connection loss an empty bytestring is returned
          if B.length cont == 0
            then do
                liftIO $ do
                    logStr lAreaAlways "TM Client terminated!"
                    ifRaiseEvent interface EV_TM_Socket_Disconnected
            else do
                checkerThread sock interface
```
The race function ensures that if either of the two threads terminates, the other is terminated as well, without any additional effort. I love the async library…
The TM thread has another speciality: if no packet is sent within a certain time interval, a so-called idle packet should be generated (some kind of keep-alive message). I got a solution from Stack Overflow and ended up with this implementation:
```haskell
-- block (retry) until the delay TVar is set to True
fini :: TVar Bool -> STM ()
fini = check <=< readTVar

-- Read the next value from a TChan or time out
readTMPacketChannel :: TMPacketChannel -> IO (Maybe TMMessage)
readTMPacketChannel chan = do
    delay <- registerDelay 1020000
    atomically $ Just <$> readTChan (pktChannel chan)
                 <|> pure Nothing <* fini delay
```
STM provides the registerDelay and check functions for this purpose. Through the SO answer I also learned more about uses for Applicative and Alternative. Much to learn you still have, my young padawan, I have to say to myself…
TM Simulation Thread
The TM simulation thread is quite simple: it gets a list of actions to execute and does exactly that. From the GUI the user can start, stop and single-step the actions; the sketch below shows one way such a control flow can be structured.
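A hedged sketch of the start/stop/single-step control (SimControl, runAction and the control scheme are illustrative names, not the simulator's actual API):

```haskell
import Control.Concurrent.STM
import Control.Monad (when)

data SimControl = SimRun | SimPause | SimStep | SimStop
  deriving Eq

-- walk the action list; before every step consult a control TVar that the GUI
-- callbacks write to
simulationThread :: TVar SimControl -> (Action -> IO ()) -> [Action] -> IO ()
simulationThread ctrl runAction = go
  where
    go []       = return ()
    go (a:rest) = do
        cmd <- atomically $ do
            c <- readTVar ctrl
            check (c /= SimPause)                         -- block while paused
            when (c == SimStep) (writeTVar ctrl SimPause) -- single step: pause again
            return c
        case cmd of
          SimStop -> return ()
          _       -> runAction a >> go rest
```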
Channels
The channels used are all STM TChan's. Depending on the task, each main processing thread has a single channel it listens on. Since potentially all threads can make use of each other, I decided to put all channels together in a single data structure and pass this to the various threads.
```haskell
data Channels = Channels {
    chanCLCW    :: CLCWChan,
    chanPkt     :: TMPacketChannel,
    chanAdmin   :: AdminChannel,
    chanTC      :: TCChannel,
    chanWriteTC :: TCWriteChannel,
    chanSim     :: SimulationChannel
    }
```
Most of the channels are simple type synonyms for a TChan. This data structure is just for convenience (it is used in most examples shown above), and new channels for new components can be added more easily than by passing the individual channels as parameters.
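For completeness, a hedged sketch of what the initChannels function used in main could look like if every field really were a plain TChan (in reality some of them, like TMPacketChannel, wrap a TChan in a small record):

```haskell
import Control.Concurrent.STM

-- create all channels in one atomic block; the field order must match the
-- Channels record above (sketch only)
initChannels :: IO Channels
initChannels = atomically $
    Channels <$> newTChan
             <*> newTChan
             <*> newTChan
             <*> newTChan
             <*> newTChan
             <*> newTChan
```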
Conclusion
Ok, this was a rather lengthy post, so I'll shut up now. I hope somebody finds this useful and/or interesting. Questions and comments are welcome. If specific questions arise, they may justify a follow-up post.
Cheers,
Michael
Lol, Puss Simulator
I'd highly recommend changing the name. There is a lot of valid concern about creating an inclusive environment for women in the software industry (and in life in general), and I don't think that's the best name if you want to create such an environment. Right?
Yes, I also thought about this. It really is no problem in the space domain; I know a lot of spacecraft operators (quite a few of them women), and no one was offended by it, of course because they first see PUS as the acronym it is, a standard in the space domain. But you are right, when going outside the domain a renaming would be desirable.
I have been thinking about learning Haskell and thought of doing a CPE simulator for TR-069 to test an ACS (the standard server).
So your post is quite timely!
What would you estimate in time savings (I am assuming using Haskell saved time) compared to having to do this in C?
Ignoring the GUI code, do you have a breakdown of the total number of lines of Haskell code by type (comments, whitespace, etc.)?
Do you plan on posting the project?
Last year I gave a presentation at the European Space Operations Centre, so the numbers I have are about a year old. Also, the values are taken directly from the sloccount tool by David Wheeler, which does not do a breakdown into comments and so on (could you recommend a tool that does?). There is a tool coming with SCOS which is similar, but it has less functionality and is quite user-unfriendly. It is also hard to give a fair comparison, as the C++ tool uses SCOS libraries which had to be rewritten for the Haskell version, and it is hard to find out exactly which code was used. So the C++ LoC was calculated as the C++ source code of the tool itself plus ONLY the header files of the SCOS libraries used, so the real line count is higher; still, the Haskell version beats that by a wide margin while having more functionality:
LoC: C++: 23,542+ vs. Haskell: 8,997
My experience was that productivity in Haskell is on average about 1.5 to 4 times that of C++. In the beginning I was also still learning Haskell, so it was slightly worse than C++, which I know quite well. Now I am almost always (with a few exceptions concerning topics in Haskell I have to learn from scratch) between 2 and 4 times as productive, so the time needed is about 50% to 25% of that in C++. I can send you the poster from the ESOC presentation if you want; just PM me.
Unfortunately, I can't post the project, as it contains spacecraft-specific functionality which is confidential. The post covers general parts which could be in any other application too. I thought about writing a follow-up on the implementation of the protocols (which are part of a public standard anyway) if I have time.
I hope you find the time for your follow-up article… it could help others.
Thanks for sharing this. It is quite interesting to me as I also apply Haskell in a satellite operations context, and I am happy to hear that I am not alone! I would be quite interested in seeing the poster from your ESOC presentation, if you don't mind sending it. Are you aware of any other people/teams using Haskell in the space domain?
Ah, I also thought I was alone… I am not aware of other people using Haskell in the space domain. Anyway, I'll send you the poster. It is a year old and the application has evolved quite significantly in the meantime. Another blog post is also in preparation.
Can I contact you? I need to know about some field requirements regarding your communication and details about the interfacing, please.
Yes of course. You can ask here or michaeloswald5 gmail.com.
I am very interested to hear about Haskell in the space domain; please post more!
I don’t do operations, but I do use Haskell for little tools for my own use as an embedded system developer for space systems at NASA.
For general tools I use C or C++, since that is what people here know, so it's really interesting to hear about Haskell being used for substantial work.
Thanks a lot for this interesting post!