Tables in scientific papers often look less than professional, and sometimes this can even get in the way of understanding the message. In this blog post we will use pandas to automate making publication-ready LaTeX tables that look great.
The process of setting up a new LaTeX project is made up of many manual steps, resulting in a patchwork that is, from the start, neither exercisable nor complete. In this post we will see how to construct a solid starting point with a single command. This is part of a series to create the perfect open science
The rejection rate for papers at good conferences is very high. To be accepted, a paper must not only be of high scientific quality, but also be perceived as such at first glance, or risk being thrown in the recycling bin. In this post we construct a system that automatically optimizes one proxy metric for perceived quality, removing one small frustrating step of scientific paper authorship and hopefully avoiding the bin.
When submitting a scientific paper to a conference or a journal, there is often a mandatory step of passing the automated PDF checks set up by that publication. This step can often be nerve-racking and cause many hours of LaTeX troubleshooting. In this post we will create a series of test cases to catch these problems early in the writing process so that you can submit your manuscript only once.
The process of writing a LaTeX document is often full of manual steps, resulting in a patchwork document that is neither exercisable nor complete. This makes it impossible to reproduce the document from code and data. In this post we will create a pipeline for compiling a LaTeX document that works both locally and in GitLab CI. This is part of a series to create the perfect open science
In academic writing with LaTeX there are a lot of things that can frustrate the author. Many packages exist to help alleviate these frustrations, but they can be hard to find. In this post I list 10 of my favorite packages that help remove some of this frustration and make your papers look nicer, so that you have a higher probability of getting your paper accepted. Hopefully.
Researchers have called for more transparency from The Public Health Agency of Sweden regarding its COVID-19 estimates for Sweden. Recently, a report was released covering such estimates for the Stockholm region. Alongside the report, the code used for these estimates was uploaded to GitHub, which makes it possible for others to review and critique the work. In this post we will take a look at the reproducibility aspects of this release. We find that it is possible, to some extent, to reproduce the figures in the report, and we suggest several improvements to the repository.
Machine Learning (ML) is an important enabler for optimizing, securing and managing mobile networks. This leads to increased collection and processing of data from network functions, which in turn may increase threats to sensitive end-user information. Consequently, mechanisms to reduce threats to end-user privacy are needed to take full advantage of ML. We seamlessly integrate Federated Learning (FL) into the 3GPP 5G Network Data Analytics (NWDA) architecture, and add a Multi-Party Computation (MPC) protocol for protecting the confidentiality of local updates. We evaluate the protocol and find that it has much lower overhead than previous work, without affecting ML performance.
Figures in scientific papers often look less than professional, and sometimes this can even get in the way of understanding the figure. In this blog post we show how to use tikzplotlib to make publication-ready figures that look great and can be styled from the document preamble. Beautiful and understandable figures can possibly lead to a higher publication acceptance rate, at least I hope so…
In the future, different intelligent systems will need to share data and experiences with each other to become good enough for certain tasks. RISE Industrial PhD student Martin Isaksson's research is an important step on the way. This area is highly relevant for Ericsson and the development of its 5G technology, but lessons learned along the way may open the door to many more solutions.
There is a rapid evolution in telecommunication, with denser networks and systems operating on an increasing number of frequency bands. In next-generation 5G networks, even further densification is needed to meet the tight requirements, implying more nodes on each carrier. Denser networks and more frequencies make it challenging to ensure the best possible cell and frequency carrier assignment for a User Equipment (UE) without the UE performing an excessive amount of inter-frequency measurements and reporting. In this paper, we propose a procedure for predicting the strongest cell of a secondary carrier, and the procedure is exemplified in a UE load-balancing use case. The prediction is based only on measurements of the primary carrier cells, avoiding costly inter-frequency measurements. Simulations of a realistic network deployment show that a UE selection based on the proposed secondary carrier prediction is significantly better than a random UE selection for load balancing.
Sometimes, when I come across a PowerPoint presentation with multiple multi-slice pie charts on a single slide, my head hurts and I have to go air-poop for half an hour. Now I have decided to take action!
Using a PocketLab placed on top of a washing machine, we measure the magnetic field generated by the motor. The magnitude of this vector is used to detect whether the motor is on, and when the motor has been off for some time, we say that the cycle is finished.
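The detection logic described above can be sketched in a few lines of Python. The threshold and the number of quiet samples required are hypothetical values chosen for illustration; in practice they would be tuned to the actual sensor readings:

```python
import math

def magnitude(bx, by, bz):
    """Magnitude of the 3-axis magnetic field vector."""
    return math.sqrt(bx**2 + by**2 + bz**2)

def cycle_finished(samples, on_threshold=60.0, off_samples_needed=30):
    """Return True once the field magnitude has stayed below
    on_threshold for off_samples_needed consecutive samples,
    i.e. the motor has been off long enough to call the cycle done.
    samples is an iterable of (bx, by, bz) readings."""
    quiet = 0
    for bx, by, bz in samples:
        if magnitude(bx, by, bz) < on_threshold:
            quiet += 1
            if quiet >= off_samples_needed:
                return True
        else:
            quiet = 0  # motor kicked back in; restart the quiet counter
    return False
```

Resetting the counter whenever the field spikes again is what keeps short pauses between wash phases from being mistaken for the end of the cycle.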