Problem:
Valuation of securitized products is a critical part of our client's business. The modeling calculations were slow and unreliable while consuming a large share of CPU capacity.
Solution:
First we reviewed the back-end system and performed an architectural analysis. Based on our suggestions, we implemented and maintained the daily valuation of securitized products by running the modeling calculations on a distributed grid of about 1,000 computers. We made several optimizations to the runtime to allow better usage of CPU capacity, and we debugged and updated the legacy DHT implementation and the Perl runtime.
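As an illustration of the DHT-style work partitioning involved, the sketch below shows a minimal consistent-hash ring in Python that assigns securities to grid nodes. The node names and security identifier are hypothetical; the actual grid ran on the client's legacy C++/Perl implementation.

    import hashlib
    from bisect import bisect

    class ConsistentHashRing:
        """Minimal consistent-hash ring: maps each security to a grid node,
        so every node owns a stable share of the daily valuation workload."""

        def __init__(self, nodes, replicas=50):
            self.ring = []  # sorted list of (hash, node) virtual points
            for node in nodes:
                for i in range(replicas):
                    self.ring.append((self._hash(f"{node}#{i}"), node))
            self.ring.sort()
            self.keys = [h for h, _ in self.ring]

        @staticmethod
        def _hash(key):
            return int(hashlib.md5(key.encode()).hexdigest(), 16)

        def node_for(self, security_id):
            idx = bisect(self.keys, self._hash(security_id)) % len(self.ring)
            return self.ring[idx][1]

    ring = ConsistentHashRing([f"grid-node-{n:04d}" for n in range(1000)])
    print(ring.node_for("ABS-2019-0042"))  # hypothetical security identifier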
Technologies:
C++, Perl
Problem:
Our client had a manual CI/CD process executed on every developer machine. Compilation times were long, several tools involved many manual steps, and some tools required a license that was shared amongst the team, which created synchronisation problems. They were also using an older type of version control, which was very slow.
Solution:
It was a requirement that we keep the incremental build and the ability to manually modify the source tree without checking changes into version control. Our solution was to build them a build farm, where the compilation can run on a highly parallel bare-metal machine, which also solved the problem of synchronising the license.
We created a custom script that gives them access to all the Jenkins functionality through their make commands, allowing them to keep using the command line instead of initiating or cancelling builds on the Jenkins page.
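A minimal sketch of how such a make-driven trigger can talk to the Jenkins remote API (the server URL, credentials, and job name are hypothetical; CSRF crumb handling is omitted):

    import requests

    JENKINS = "https://jenkins.example.local"  # hypothetical server URL
    AUTH = ("builduser", "api-token")          # hypothetical user/API token

    def trigger_build(job, params=None):
        """Start a Jenkins job remotely; invoked from a `make build` target."""
        endpoint = "buildWithParameters" if params else "build"
        r = requests.post(f"{JENKINS}/job/{job}/{endpoint}",
                          auth=AUTH, params=params or {})
        r.raise_for_status()
        return r.headers.get("Location")  # queue URL of the scheduled build

    if __name__ == "__main__":
        print(trigger_build("nightly-build", {"TARGET": "release"}))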
We created a version control cache on the build farm to speed up checkout times, and we built a custom script that checked their local directory for differences against the base version stored in the repository and automatically synced those items into their build directories on the build farm. We also made it possible to deliver the artifacts produced by the build process back to their developer machines.
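The core of that sync step can be sketched as follows, assuming a manifest of file hashes captured from the checked-in base version and a hypothetical rsync target on the farm:

    import hashlib
    import subprocess
    from pathlib import Path

    def file_hash(path):
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def changed_files(workdir, base_manifest):
        """Compare the working tree against {relative_path: sha256} entries
        recorded for the developer's base version in the repository."""
        changed = []
        for rel, base_digest in base_manifest.items():
            f = Path(workdir) / rel
            if not f.exists() or file_hash(f) != base_digest:
                changed.append(rel)
        return changed

    def sync_to_farm(workdir, files, remote="builder@farm:/builds/ws"):
        # rsync -R keeps the relative paths when copying into the build dir
        if files:
            subprocess.run(["rsync", "-R", *files, remote],
                           cwd=workdir, check=True)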
Problem:
Our client needed an independent third-party certification of their embedded product, confirming that it satisfied a set of security and functional requirements. Their source code is written in C. Our approach was to verify the hardware layout and understand the source code of their software.
Solution:
We checked the available interfaces of the device, their connections to the microcontroller, and the attack vectors they provide. We also statically analysed the source tree to uncover coupling between functions, code reviewed 100% of the source code, and wrote a pseudo-functionality description for every C function. From the static analysis and the pseudo-workflow of the program, we could confirm that the requirements were fulfilled by the code and that no additional (backdoor) functionality was present.
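A crude sketch of the kind of coupling map derived from the source tree; the real analysis used proper tooling, and this regex-based version only illustrates the idea of a per-function call graph:

    import re
    from collections import defaultdict
    from pathlib import Path

    # Heuristic patterns for top-level C function definitions and call sites.
    DEF_RE = re.compile(r"^\w[\w\s\*]*\b(\w+)\s*\([^;]*\)\s*\{", re.M)
    CALL_RE = re.compile(r"\b(\w+)\s*\(")

    def call_graph(src_dir):
        """Map every defined function to the project functions it calls,
        exposing the coupling between them."""
        sources = [p.read_text(errors="ignore")
                   for p in Path(src_dir).rglob("*.c")]
        defined = {name for text in sources for name in DEF_RE.findall(text)}
        graph = defaultdict(set)
        for text in sources:
            for m in DEF_RE.finditer(text):
                caller = m.group(1)
                body = text[m.end():text.find("\n}", m.end())]
                graph[caller] |= {c for c in CALL_RE.findall(body)
                                  if c in defined and c != caller}
        return graph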
Technologies:
24/7 customer support and network surveillance on 1st and 2nd level in English and Hungarian; YouTrack; Nagios; Custom monitoring platform
Problem:
Our client wanted to build a customer complaint website. The requirements were to make it user-friendly and fully domain-specific. It had to run on the company's Azure platform and use it for authentication as well. It also had to provide an API for external tools.
Solution:
Our solution was to build a wrapper around a customer complaint handling tool, which allowed us to fully customise the user interface and the API. We chose YouTrack as the issue tracking tool and built a REST API around it. We also built a custom plug-in to authenticate against the company's Azure cloud.
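A minimal sketch of such a wrapper endpoint, mapping a domain-specific complaint onto a YouTrack issue through its REST API (the instance URL, token, and project id are hypothetical):

    import requests
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    YOUTRACK = "https://youtrack.example.com/api"  # hypothetical instance
    TOKEN = "perm:..."                             # hypothetical token

    @app.post("/complaints")
    def create_complaint():
        """Domain-specific endpoint that stores a customer complaint
        as a YouTrack issue."""
        data = request.get_json()
        issue = {
            "project": {"id": "0-0"},  # hypothetical YouTrack project id
            "summary": data["title"],
            "description": data.get("details", ""),
        }
        r = requests.post(f"{YOUTRACK}/issues", json=issue,
                          headers={"Authorization": f"Bearer {TOKEN}"})
        r.raise_for_status()
        return jsonify(r.json()), 201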
Technologies:
Task tracker system development on the server side; k8s; Docker; C++; YouTrack; Azure; Web admin portal; Angular 10
Problem:
Our client has a hardware/software solution which consists of multiple applications on an industrial box with multiple SoCs in it. It is required to be highly available, and physical access to the shipped devices is very limited. The devices are connected to the internet.
The problem was that their CI/CD process was completely manual: the binaries were built on developer machines in an uncontrolled environment, and delivery was done by hand, securely copying binaries to the devices over the internet.
Solution:
Our solution went two ways: first, to create a repository containing the build tools and all the external source code and library dependencies; second, to create a deployment process that allows secure and stable delivery of binary images and makes sure the devices stay functional, even after a failed deployment.
For the build, we changed their repository structure, scripts, and Makefiles so we could build in a controlled environment, and we dockerised it. We also built a build farm using TeamCity, which was also able to deliver the artifacts back to the developers' own devices. We changed the artifact format from a zip of binaries to pkg packages delivered to a Nexus package repository, which allows standard installation tools to bring in the binaries and all of their dependencies.
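Publishing a built package to Nexus can be sketched like this, assuming a hosted raw repository (the repository URL and credentials are hypothetical):

    import requests
    from pathlib import Path

    NEXUS = "https://nexus.example.local/repository/device-packages"  # hypothetical
    AUTH = ("ci-user", "ci-password")  # hypothetical credentials

    def publish(package_path, version):
        """Upload a built package so the devices can later install it,
        with dependencies, using standard package tooling."""
        pkg = Path(package_path)
        url = f"{NEXUS}/{version}/{pkg.name}"
        with pkg.open("rb") as fh:
            r = requests.put(url, data=fh, auth=AUTH)
        r.raise_for_status()
        return url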
For the deployment, we first updated the applications so they could run without root privileges. We changed them to run as systemd services and connected the external watchdog to Linux instead of to the applications. We changed the partitioning of the device to include a safe boot partition with a minimal Linux installation, allowing the running partition to be completely rewritten even after a failed update. This also lets us update the operating system itself in a safe way.
We mounted the system partition as read-only and changed the applications to write all their data into a controlled space on the device. We created a custom management website where our client can provision the devices with a single click. We wrote Ansible playbooks to configure the devices, and the execution of these playbooks was triggered automatically from the webpage.
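A minimal sketch of the glue between the management page and Ansible (the endpoint shape and playbook name are hypothetical):

    import subprocess
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.post("/provision/<device_id>")
    def provision(device_id):
        """One-click provisioning: the management page calls this endpoint,
        which runs the Ansible playbook against the selected device."""
        result = subprocess.run(
            ["ansible-playbook", "provision.yml",  # hypothetical playbook
             "--limit", device_id,
             "--extra-vars", f"device_id={device_id}"],
            capture_output=True, text=True)
        ok = result.returncode == 0
        return jsonify({"device": device_id, "ok": ok,
                        "log": result.stdout[-2000:]}), 200 if ok else 500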
Technologies:
CI/CD and live deployment processes; TeamCity; k8s; Docker; Ansible; Nexus; CMake; Custom Python scripts
Problem:
Our client had an old, heavily manual CI/CD process on a fairly large code-base.
Their requirement was to modernize and automate all the low-hanging fruit. Their software is mostly built from C sources using CMake and scripts, compiled for a large number of Linux platforms, with parts of it compiled on Windows.
Our approach was to make their artifacts reproducible, removing the human factor. They required us to use TeamCity as the CI/CD tool.
Solution:
We collected their tooling and created a repository for it, which allows older versions to be compiled with their matching older toolset, simply by versioning the tools in git and including them as a sub-repository in the project.
We dockerised their build environment and installed a build farm on bare-metal hardware. In TeamCity we set up the compilation steps, modifying their Makefiles to accept the build parameters. We also introduced their tests as a quality gate in the build process. In TeamCity, by default, it is not straightforward to trend quality metrics across builds on multiple platforms.
We created a custom script to collect test results and compiler warnings during compilation, which allows us to customize how we trend those metrics.
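The warning-counting half of that script can be sketched as below; TeamCity picks up the service message printed to stdout and trends the value as a custom statistic (the key name is hypothetical):

    import re
    import sys

    WARNING_RE = re.compile(r":\d+:\d+:\s+warning:")  # GCC/Clang warning lines

    def report(build_log_path, platform):
        """Count compiler warnings in a build log and publish the count as a
        TeamCity custom statistic so it can be trended across platforms."""
        with open(build_log_path, errors="ignore") as f:
            warnings = sum(1 for line in f if WARNING_RE.search(line))
        # TeamCity parses service messages printed to stdout
        print(f"##teamcity[buildStatisticValue "
              f"key='warnings.{platform}' value='{warnings}']")
        return warnings

    if __name__ == "__main__":
        report(sys.argv[1], sys.argv[2])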
Technologies:
CI/CD processes; TeamCity; GitLab; Automake/make; dpkg; Docker; Custom Python scripts
Problem:
Our client is a leading European manufacturer in the automotive industry. We were their principal embedded software provider, as their initial investigation had shown that no off-the-shelf solutions met their requirements. They had hardware with limited ROM and RAM capacity, which was not enough for the size of the ECU (engine control unit) software.
Solution:
We developed the ECU (engine control unit) software and the bootloader for OTA (over-the-air) updates, and optimized the source code for speed, memory usage, and size.
We analysed each and every piece of object code and worked on reducing the size of the new code. We provided an audit trail of every change made to the code through systematic code reviews. We also tuned the compilation options to produce smaller, more optimal binary files.
Change control ensured the code stayed functionally equivalent, and as a result we were able to reduce the size of the software by 36%.
Problem:
Our client runs an online game platform (poker) and wanted to analyse player behaviour using the huge amount of existing data. It was a complex project in which we worked on grid computing and data analysis.
Solution:
We worked on creating software that provides better delivery of huge datasets and simulates situations requiring high computing power. It takes machine-learning AI and puts it on an embedded device that serves models based on 5-10 million card games.
We created an interface exposing a big-data analysis method that enables per-player performance analysis. We used a Lean-Agile development approach over several sprints, allowing frequent iterations of the software that could adapt to the evolving AI development.
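Per-player performance analysis over hand histories can be sketched as a simple aggregation; the record layout below is hypothetical:

    from collections import defaultdict

    def per_player_stats(hands):
        """Aggregate per-player performance from an iterable of hand records.
        A record is assumed to look like:
        {"player": "p1", "won": 12.5, "showdown": True}"""
        stats = defaultdict(lambda: {"hands": 0, "net": 0.0, "showdowns": 0})
        for h in hands:
            s = stats[h["player"]]
            s["hands"] += 1
            s["net"] += h["won"]
            s["showdowns"] += h.get("showdown", False)
        return {p: {**s, "avg_net_per_hand": s["net"] / s["hands"]}
                for p, s in stats.items()}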
Problem:
The client works in consumer electronics, developing laser tech games. Our role was to bring software and AI to an embedded system for these games. The software had to run on desktop as well as on a Raspberry Pi.
Solution:
Working closely with the client's engineering team, we implemented an embedded solution built around a PIC microcontroller, with a bootloader and a post-mortem debugging feature.
The device consists of an embedded board managed by desktop software that handles the overall operation and maintenance, developed in Cube and C++.
We also added a digital signage network based on Raspberry Pi.
Problem:
Our client is an international company manufacturing diagnostic devices. They were working to disrupt the healthcare testing process and wanted to sell their product not only in Europe (where they held ISO certification) but also in the USA, where certification is issued by the FDA (U.S. Food and Drug Administration).
To obtain FDA approval, our role was to provide support with integration and management, mirroring the existing software development process and ensuring quality across both product functionality and adherence to regulatory requirements.
Solution:
We provided process management, verification and validation testing, and firmware development work from zero. We also worked through several thick pre-written specification documents and delivered FDA-compliant documentation at the end of each test run (approx. 5,000 tests). Our support enabled the client to gain FDA approval.