I finally have a good, simple working example of HttpClient for the Particle Photon. I have tested it on my Particle Photon and would like to document and share it with everyone. One problem I did face: even after including the HttpClient library in my experimental project, the build kept erroring out with references to the glowfish library, which made me wonder whether glowfish was ever a dependency of the HttpClient library I was including. So I copied the HttpClient.cpp and HttpClient.h files directly into my project.
Take a look at the GitLab link to the HttpClient GET example here:
At Basefx, the previous company I worked for in Beijing, China, I got to play with sockets. That was more than a year ago, and I can clearly recall that we (the team) did not make use of a systemd service or a socket file path.
Maybe that was handled by the systems department (which is more likely, because we were monitoring it with a Nagios web app), but the tool we had installed ran Python scripts serving a socket-based server at a location from where the data was picked up.
For the last few days I have had a Particle Photon set up as a TCP client sending room temperature and humidity to a TCP server running on a Raspberry Pi.
Initially, however, I picked up a Node.js example, since I have also played a lot with Node.js while writing web apps in Vue.js, and I do these sorts of things to refresh my knowledge and not forget it.
Now I feel like doing it in Python, since Python is one of my favourite programming toolkits. I also need to write some more systemd services that run Python code to fetch data from a REST API on the internet and post the data to a MySQL server.
In the case of my sockets usage with the Particle Photon, I want to run a TCP server and publish the temperature and humidity data to a MySQL server.
What are sockets in computer terminology?
By definition: a socket is one endpoint of a two-way communication link between two programs running on the network. A socket is bound to a port number so that the TCP layer can identify the application the data is destined for. An endpoint is a combination of an IP address and a port number. Sockets thus allow communication between two different processes on the same machine or on remote machines. To be more precise, they are a way to talk to other computers or devices using standard Unix file descriptors. In Unix, every I/O action is done by reading or writing a file descriptor. A file descriptor is just an integer associated with an open file, and it can be a network connection, a text file, a terminal, or something else.
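As a minimal sketch of these ideas in Python: a throwaway TCP echo exchange over localhost, where the payload imitates the Photon's temperature/humidity readings (the message format here is made up for illustration):

```python
import socket
import threading

# Bind a listening socket; port 0 asks the OS to pick any free port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
host, port = srv.getsockname()  # the (IP, port) endpoint clients connect to

def serve_once():
    # Accept one client and echo its message back.
    conn, _ = srv.accept()
    with conn:
        conn.sendall(b"echo: " + conn.recv(1024))

t = threading.Thread(target=serve_once)
t.start()

# The client side: connect to the endpoint and send one reading.
with socket.create_connection((host, port)) as cli:
    cli.sendall(b"temp=21.5,hum=40")
    reply = cli.recv(1024)

t.join()
srv.close()
print(reply)  # b'echo: temp=21.5,hum=40'
```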
What is a socket file?
A socket file is a special file used for inter-process communication, enabling communication between two processes on the same host. In addition to sending data, processes can send file descriptors across a Unix domain socket connection using the sendmsg() and recvmsg() system calls.
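Python exposes this descriptor-passing directly (since 3.9) via socket.send_fds() and socket.recv_fds(), which wrap sendmsg()/recvmsg() with SCM_RIGHTS. A small single-process sketch, using a socketpair instead of a socket file for brevity:

```python
import os
import socket

# socketpair() gives two connected AF_UNIX stream sockets in one process.
parent, child = socket.socketpair()

# Make a pipe and write into it; we will pass the read end's fd across.
r, w = os.pipe()
os.write(w, b"hello")

# send_fds / recv_fds (Python 3.9+, Unix only) carry file descriptors
# alongside normal data using sendmsg()/recvmsg() under the hood.
socket.send_fds(parent, [b"one fd follows"], [r])
msg, fds, flags, addr = socket.recv_fds(child, 1024, 1)

# The received descriptor is a new fd referring to the same pipe.
data = os.read(fds[0], 5)
print(msg, data)  # b'one fd follows' b'hello'
```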
Have a look at the TCP server run through a systemd service below.
Thanks for stopping by to read.
While learning and fixing things I use the debugger a lot.
Since it is an indispensable tool for me, I wanted to share what I have learned with everyone.
It can get really annoying when you can't run multi-line code in the debugger, and useful code is usually multiple lines. To get around that, just run the line below inside the pdb debugger in the terminal.
(Pdb) !import code; code.interact(local=vars())
That's it: you land in a multi-line Python console inside the debugger.
Python has a great package, collections. In the past I used namedtuple to mock up argparse. In this example I want to share how we can extend the Python dictionary to support dotted access, i.e. how we can read the value of a key as an attribute, dict_object.key.
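A minimal sketch of such a dotted dictionary (the class name and sample keys are my own for illustration): subclassing dict and routing attribute access through the key lookup.

```python
class DotDict(dict):
    """A dict whose keys are also readable/writable as attributes."""

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails,
        # so real dict attributes/methods keep working.
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self[name] = value

    def __delattr__(self, name):
        del self[name]


d = DotDict({"host": "localhost", "port": 3141})
print(d.host)       # localhost
d.scheme = "http"
print(d["scheme"])  # http
```

Note that keys shadowed by real dict methods (like "items" or "keys") cannot be read as attributes; for those, fall back to d["items"].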
I decided to read about magic method again today. It is always good to revisit and refresh the learning.
They're special methods that you can define to add "magic" to your classes. They're always surrounded by double underscores (e.g. __init__ or __lt__).
Here I want to focus on the magic methods that get executed when an object is created from a class.
We all know the most basic magic method, __init__. It is the way we define the initialization behavior of an object. However, when I call x = SomeClass(), __init__ is not the first thing to get called. The first is actually a method called __new__, which creates the instance and then passes any creation arguments on to the initializer. At the other end of the object's lifespan, there is __del__. If you want to read about this in more detail, check out Rafe Kettler's guide to Python magic methods: https://rszalski.github.io/magicmethods/
The reason I wanted to talk about the magic methods that get executed every time a class is instantiated is that I wanted to pen down some learning about the Singleton pattern.
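Here is a minimal sketch of such a Singleton in Python, built on __new__:

```python
class Singleton:
    _instance = None  # class attribute, shared by the class itself

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            # First call: actually create the instance and cache it.
            cls._instance = super().__new__(cls)
        return cls._instance


a = Singleton()
b = Singleton()
print(a is b)  # True: both names refer to the same object
```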
In the above code:
__new__ is also a class method, and it runs before __init__ (at this level you have control over which object gets created).
_instance is a class attribute, not an instance attribute, so it is visible in the __new__ method before any instance is created.
The first time around, cls._instance is None, so __new__ creates the instance and stores the result in the _instance class attribute of cls (which is your Singleton class).
The second time it is not None, because a reference to the instance has been stored, so __new__ returns the same cls._instance object. As a result, object.__new__ is only called once during the lifetime of the class.
This is the design pattern of the singleton: create once, return the same object every time.
While researching, I also came across a Stack Overflow question that discusses why the Borg pattern is better than the Singleton pattern:
https://stackoverflow.com/q/1318406/9567948 . The first answer gives the best and simplest explanation:
If you subclass a Borg, the subclass's objects have the same state as their parent class's objects, unless you explicitly override the shared state in that subclass. Each subclass of the Singleton pattern has its own state and will therefore produce different objects.
Also in the singleton pattern the objects are actually the same, not just the state (even though the state is the only thing that really matters).
Furthermore, a class basically describes how you can access (read/write) the internal state of your object. In the Singleton pattern you can only have a single class, i.e. all your objects give you the same access points to the shared state. This means that if you have to provide an extended API, you need to write a wrapper around the singleton.
In the Borg pattern you can extend the base "Borg" class and thereby extend the API to your taste more conveniently.
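A minimal sketch of the Borg pattern: instances are distinct objects, but they all share one __dict__, so they share state.

```python
class Borg:
    _shared_state = {}  # one dict shared by every instance

    def __init__(self):
        # Every instance uses the same dict as its __dict__,
        # so attribute writes on one are visible on all.
        self.__dict__ = self._shared_state


a = Borg()
b = Borg()
a.x = 42
print(b.x)     # 42: state is shared
print(a is b)  # False: the objects themselves are distinct
```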
Now, if you are with me, let's dig even further into metaclasses.
What are metaclasses?
Metaclasses are the 'stuff' that creates classes.
You define classes in order to create objects, right?
But we learned that Python classes are objects.
Well, metaclasses are what create these objects. They are the classes of classes; you can picture them this way:
You've seen that type lets you do something like this:
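For instance, the three-argument form of type builds a class on the fly (Foo and bar are made-up names for illustration):

```python
# type(name, bases, attrs) creates a new class object dynamically,
# equivalent to writing `class Foo: bar = True`.
Foo = type("Foo", (), {"bar": True})

f = Foo()
print(Foo.bar)         # True
print(type(f) is Foo)  # True
```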
It's because the function type is in fact a metaclass. type is the metaclass Python uses to create all classes behind the scenes.
Now you may wonder why on earth it is written in lowercase, and not Type.
Well, I guess it's a matter of consistency with str, the class that creates string objects, and int, the class that creates integer objects. type is just the class that creates class objects.
You can see that by checking the __class__ attribute. Everything, and I mean everything, is an object in Python.
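A quick check of the __class__ chain makes the point: values are instances of classes, and classes are instances of type.

```python
age = 35
name = "bob"

print(age.__class__)             # <class 'int'>
print(name.__class__)            # <class 'str'>
# ...and the class of a class is type:
print(age.__class__.__class__)   # <class 'type'>
print(name.__class__.__class__)  # <class 'type'>
```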
A few months back I started looking into GitLab CI at my workplace; since it is now part of almost my daily routine, I have picked up the concepts of GitLab CI and I love it. I now think of continuous integration as the best and fastest way to deploy. Yesterday I finished the first stage of my Particle Photon weather project. Since particle.io provides an online IDE, I started out working and editing the code directly online, which lets you flash the firmware directly to a Particle Photon (or an Electron, or a mesh device). So I decided to set up continuous integration for this Particle Photon project. It is basically C++ code, and Particle provides its own CLI tool that makes the job a lot easier. The whole picture of what to do was clear to me, and I wrote the .gitlab-ci.yml file below, which uploads code to my Particle Photon from anywhere, no matter where the Photon is located.
Of course, if you have the Particle MCU physically accessible, you can use the faster serial (wired) upload; CI is only convenient if your MCU is located remotely, or if you are lazy.
In the above code I log in to particle.io using the CLI:
- particle -q login -u $PARTICLE_USR -p $PARTICLE_PWD
I have saved the PARTICLE_USR and PARTICLE_PWD variables in the GitLab CI environment settings. PARTICLE_PWD is protected, so it never gets echoed.
I wrote the above code as a basic boilerplate to start with, and it is working perfectly (y)
A few days back I was reading about cardinal numbers, and it reminded me of a discussion about infinities I had with a friend last year. The point was that some infinities are bigger than other infinities. How come?
Viewed from a mathematical perspective, there is a logical explanation. But before I get to which infinity is greater, let's find out what cardinality is. The term cardinality refers to the number of elements in a set. Cardinality can be finite or infinite. Notably, the cardinality of the set of real numbers is greater than the cardinality of the set of integers, even though both sets are infinite.
Cantor's zigzag (triangle) enumeration demonstrates that the cardinality of the rational numbers is the same as that of the integers: both have cardinality aleph-null, the smallest infinity. The real numbers have the strictly larger cardinality of the continuum, 2 to the power aleph-null (which equals aleph-one if the continuum hypothesis is assumed).
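In symbols, the relationship between these cardinalities is:

```latex
|\mathbb{Z}| = |\mathbb{Q}| = \aleph_0
\qquad\text{but}\qquad
|\mathbb{R}| = 2^{\aleph_0} > \aleph_0
```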
I hope I make sense and do not hurt your heads.
I bought this Particle Photon in India last year but couldn't make good use of it; it was lying unused, so I brought it with me to China, thinking I would fiddle with it and do some project. After around 11 months of living in Beijing, I managed some spare time (while my 16-month-old son was taking a nap) to set up the Particle Photon, hooked to a DHT22 placed outside my home, to monitor the outdoor weather.
Having experience using the DHT22 with other Arduino boards, I hooked it up quite easily, and I used thingspeak.io to upload the data.
You can have a look at the code I wrote for the Particle Photon to read the sensor data. Nothing new; I just used the Adafruit_DHT.h and Thingspeak.h libraries.
In many companies you may not be allowed to connect to the Docker Hub registry on the internet, most likely as part of the security rules. That is how it is at the company (Base-fx) I am currently working at.
Setting up a proper Docker registry means the developers who use images do not pull them from Docker Hub themselves. Instead, they ask a department to pull the base Docker image from Docker Hub. Let's assume the systems department pulls base Docker images to the host docker01. This means the docker01 machine has access to the internet, while the software developers who need Docker images pull from a dedicated host running Docker and acting as a private registry, onto their own workstations running Docker. In my case I am going to use the docker02 host machine to host the private Docker registry.
The above steps may be required by the DevOps/systems engineer who has access to the internet-connected Docker host used to pull base images from Docker Hub.
So docker push hello-world pushes to Docker Hub. If you want to push to a local registry, you need to tag the image explicitly with the registry address.
If your local registry is secured, you need to run docker login docker02:5000, but that does not change the default registry. If you push or pull images without a registry address in the tag, Docker will always use Docker Hub.
Now a developer can pull images as shown below.
root@docker02: docker pull docker02:5000/myworld
Download and create the devpi container
To download the image run the following command:
This may take a while if you haven't downloaded the ubuntu image before. After that is done, you can create the container with autorestart enabled (requires at least Docker 1.2):
Or, without autorestart, if you don't have a recent Docker version or just don't want it to restart all the time:
This also starts the container and it should be listening on localhost:3141. You can verify this by running docker ps to list your running containers.
Now tell pip to use the devpi server by creating the file ~/.pip/pip.conf and putting this inside:
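Assuming devpi is listening on localhost:3141 and you want pip to use the default root/pypi proxy index, the file might look like this (adjust the host and index to your setup):

```ini
[global]
index-url = http://localhost:3141/root/pypi/+simple/
```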
Or set the environment variable PIP_INDEX_URL, either manually or via your bashrc/zshrc/...rc:
Install devpi-client in a virtualenv
Then point the devpi client to your locally installed devpi server like this:
and you will see this message
So now if you run the command devpi user you will see only one user:
root is the default user, and /root/pypi is a read-only proxy index linking to the PyPI repository at https://pypi.org; its purpose is to serve packages that are not available on your local devpi instance. To upload packages to the local instance, you need to create a non-root user and an index first.
Now if you type devpi user you will see two users:
and you will see :
logged in 'san', credentials valid for 10.00 hours
Then run the command below to create a new index for the new user,
which will result in the following URL:
Navigate to http://127.0.0.1:3141/san/packages in a browser and you will see a JSON result.