Defining your own domain name is simple enough using CoreDNS. At home I have two Android phones with Ubuntu installed, one of them running CoreDNS as a backup DNS server along with node_exporter. I also have a VM and my MacBook. On the VM I have CoreDNS running with the following zone files, which lets me access the phones and the VM via domain names mapped to their IP addresses.
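A minimal sketch of such a zone file for the tt.testing zone (the hostnames and IP addresses here are placeholders, not my actual network):

```
$ORIGIN tt.testing.
@           3600 IN SOA ns.tt.testing. admin.tt.testing. 2020010101 7200 3600 1209600 3600
            3600 IN NS  ns.tt.testing.
ns          IN A 192.168.0.16
phone1      IN A 192.168.0.20
phone2      IN A 192.168.0.21
test1-api   IN A 192.168.0.16
```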
And then in my CoreDNS config (the Corefile) I serve the zone like this:
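A sketch of such a Corefile (the zone file path is an assumption; port 8533 and the 192.168.0.15 upstream are explained below):

```
tt.testing:8533 {
    file /etc/coredns/db.tt.testing
    log
}

.:8533 {
    forward . 192.168.0.15:53
    log
}
```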
Here 192.168.0.15 is my dnsdist upstream DNS server listening on port 53. The forward section forwards all requests to the upstream DNS server, except for test1-api.tt.testing.
In the shell I ran nslookup with CoreDNS running on port 8533, so I used -port=8533 to get a result. You can now add this DNS server entry to /etc/resolv.conf, or to a Pi-hole DNS server as 192.168.0.15#8533, to get results without having to pass -port=8533. But keep in mind that you are then supposed to use that node's IP address as the DNS server, rather than the one with the 8533 port.
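An illustrative nslookup session (the resolved address is a placeholder):

```
$ nslookup -port=8533 test1-api.tt.testing 127.0.0.1
Server:     127.0.0.1
Address:    127.0.0.1#8533

Name:    test1-api.tt.testing
Address: 192.168.0.16
```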
Hope that helps, it was fun and easy to setup...
I set up a cluster of DNS servers at home; the next step is to use dnsdist along with Consul to load balance.
So this is one of my amazing learnings of the day. I have been using a CoreDNS server on my home network, to which all devices connect, for caching and making websites open faster. I also host my website from home, and while browsing from inside the network I wanted the domain name to resolve to the local host IP that serves the website. Here is the solution I figured out, and it works.
Use the below Corefile for CoreDNS:
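A sketch of such a Corefile (192.168.0.80 stands in for the web server's IP, which is a placeholder here):

```
. {
    hosts {
        192.168.0.80 self-hosted.drydns.com
        fallthrough
    }
    forward . 1.1.1.1 1.0.0.1
    cache
    log
}
```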
In the above Corefile I am using the hosts plugin of CoreDNS, which lets me mimic the system's /etc/hosts file right inside the configuration. I have defined self-hosted.drydns.com pointing to the local server IP address that hosts my website. Everything else is forwarded to the Cloudflare DNS servers.
And here are the results:
When I am on my home network (the domain name resolves directly, in a single hop):
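An illustrative transcript (the resolved address is a placeholder):

```
$ nslookup self-hosted.drydns.com
Server:     192.168.0.75
Address:    192.168.0.75#53

Name:    self-hosted.drydns.com
Address: 192.168.0.80
```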
In the above nslookup output, 192.168.0.75 is the CoreDNS server I have set up on my router.
And take a look below at what happens when I am outside, connected to a public Wi-Fi or a phone hotspot.
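An illustrative transcript (the external address is a placeholder from the documentation range):

```
$ nslookup self-hosted.drydns.com
Server:     220.127.116.11
Address:    220.127.116.11#53

Non-authoritative answer:
Name:    self-hosted.drydns.com
Address: 203.0.113.10
```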
Now when I nslookup the same domain name, it resolves to the external IP of the server. 220.127.116.11 is the Cloudflare DNS server IP.
sudo netstat -ltnp | grep 53
Here 53 is the port number; if I run this on my Linux box I get the following.
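Illustrative output (the PID is a placeholder):

```
$ sudo netstat -ltnp | grep 53
tcp   0   0 0.0.0.0:53   0.0.0.0:*   LISTEN   512/coredns
tcp6  0   0 :::53        :::*        LISTEN   512/coredns
```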
I wanted to put together a list of tools I have written since I started in the VFX industry.
The first tool I ever wrote was more than a decade back, and it was not even in Python.
In 2008-2009 I had a chance to learn and work on Apple Shake as a compositor while I was in Vancouver; that is more than a decade ago now.
The first tool for Apple Shake was a macro that adds camera shake to a composite. It was done while I worked on my first project with Landon and Blake.
Second macro I wrote was to handle multi-pass composite.
I uploaded my tools to creativehead.net. The website still exists, but the interface has changed and I can't log in anymore.
Around the same time, before ever working on Apple Shake, I also wrote tools for Maya using MEL. I recall one tool was called badApple, a name given by Christopher Hartt. Since I come from a technical background I wrote a huge amount of code. I don't remember what it did, but Chris was impressed.
Later, when I moved to India and worked at Prime Focus, I wrote tools for Eyeon Fusion using Lua. One tool was simple enough: it provided a prompt to browse image sequences, add them, and generate a multi-pass composite. There were a few other tools I wrote, but I don't remember much from 2011.
Moving on from Prime Focus, I taught myself Python and made my first technical artist showreel with tools written in Python. That tool was Render Manager. Using that showreel I landed a job at MPC Bangalore. From that point on I wrote many tools for VFX; I basically became a 2D pipeline developer, mostly writing tools for Foundry Nuke. I took over an application that worked as a frontend integrating users with the backend asset management system. It was called hubNuke: in the earlier days it integrated Nuke with Hub, and later it also added integration with MPC's new asset management system, Tessa. I also wrote gizmos for Nuke used by compositors at MPC.
Besides writing tools for Nuke, I also did some tools for Maya and Silhouette.
From there I moved to Basefx in Beijing, where I worked at a VFX company but my tasks were mostly DevOps: writing tools for pipeline developers. I wrote a web application frontend for multi-site Basefx package deployment. The frontend was done in the Vue.js framework, with a REST API gateway in Flask-RESTPlus. That was my first experience with web application development.
And after that I moved to London to work at Dneg, where I worked on an application for the Editorial/Layout department, a few frontend tools for Maya (in Python) for animators, and Dneg's Popcorn framework for integrating Nuke with the Katana render dispatcher, to dispatch 2D renders after the lighting renders are done.
That was 2019, and here I am now at ILM in 2020, working on Shotgun and RV and a lot of different things. I am happy I finally got to do some Shotgun development, which I always wanted; I also asked to do it at Basefx but didn't get it. Here at ILM I wrote my first RV package for RV.
Hope more good developments come my way.
I finally have a good, simple working example of HttpClient for the Particle Photon. I have tested it on my Particle Photon and would like to document it and share it with everyone. One problem I did face: even when I included the HttpClient library in my experimental project, it kept erroring out, pointing at the glowfish library, and I couldn't tell whether glowfish was ever a dependency of the HttpClient library being included. So I have copied the HttpClient.cpp and HttpClient.h files along with my project.
Take a look at the GitLab link of the HttpClient GET example here:
At my previous company, Basefx in Beijing, China, I got to play with sockets. That was more than a year ago, and I can clearly recall that we (the team) did not make use of a systemd service or a socket file path.
Maybe it was done by the systems department (which is more likely, because we were monitoring it with a Nagios web app), but the tool we had installed ran Python scripts as a socket-based server writing to a location from where the data was picked up.
For the last few days I have had a Particle Photon set up as a TCP client sending room temperature and humidity to a TCP server running on a Raspberry Pi.
However, I initially picked up a Node.js example, since I have also played a lot with Node.js while writing web apps in Vue.js, and I do these sorts of things to refresh my knowledge and not forget it.
Now I feel like doing it in Python, since Python is one of my favourite programming toolkits. I also need to write some more systemd services to run Python code that fetches data from a REST API on the internet and posts the data to a MySQL server.
In the case of my usage of sockets with the Particle Photon, I want to run a TCP server and publish the temperature and humidity data to a MySQL server.
What are sockets in computer terminology?
By definition: a socket is one endpoint of a two-way communication link between two programs running on the network. A socket is bound to a port number so that the TCP layer can identify the application that data is destined for. An endpoint is a combination of an IP address and a port number. So sockets allow communication between two different processes on the same machine or on remote machines. To be more precise, it's a way to talk to other computers or devices using standard Unix file descriptors. In Unix, every I/O action is done by writing or reading a file descriptor. A file descriptor is just an integer associated with an open file, and it can be a network connection, a text file, a terminal, or something else.
What is a socket file?
A socket file is a special file used for inter-process communication, which enables communication between two processes. In addition to sending data, processes can send file descriptors across a Unix domain socket connection using the sendmsg() and recvmsg() system calls.
Have a look at a TCP server run through a systemd service below.
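The server itself can be sketched in a few lines of Python; a systemd service would simply keep this script running. The ACK protocol, the message format, and all names below are illustrative assumptions (the parsing and the MySQL insert are left out):

```python
# Minimal TCP server sketch: accept one connection, read a reading,
# acknowledge it. A real service would loop forever and store the data.
import socket
import threading

def handle(conn):
    with conn:
        data = conn.recv(1024)          # e.g. b"temp=21.5,humidity=40"
        conn.sendall(b"ACK:" + data)    # acknowledge the reading

def serve(sock):
    conn, _addr = sock.accept()
    handle(conn)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
port = server.getsockname()[1]
server.listen(1)

t = threading.Thread(target=serve, args=(server,), daemon=True)
t.start()

# Pretend to be the Particle Photon client for a quick local check.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"temp=21.5,humidity=40")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply.decode())  # -> ACK:temp=21.5,humidity=40
```

A matching unit file would just point `ExecStart` at the script, e.g. `ExecStart=/usr/bin/python3 /opt/tcp_server.py` under `[Service]`, with `WantedBy=multi-user.target` so it starts at boot (paths here are hypothetical).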
Thanks for stopping by to read.
While learning and fixing things I use the debugger a lot.
Since it's an indispensable tool for me, I wanted to share my learnings with the entire humanity.
For a quick experiment, let's drop in a debugger breakpoint.
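A sketch of such a breakpoint, assuming the rpdb remote debugger mentioned below (the function and its body are made up for illustration):

```
import rpdb

def flaky_function(value):
    # Execution pauses here; rpdb listens on 127.0.0.1:4444 by default.
    rpdb.set_trace()
    return value * 2
```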
And then, in another terminal window, run telnet as shown below.
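For example, assuming rpdb's default port 4444:

```
$ rlwrap telnet 127.0.0.1 4444
```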
rlwrap enables the up, down, left and right keys to work with rpdb; you do not need it if you are using pdb.
It can get really annoying if you can't run multi-line code, because code usually spans multiple lines. To achieve that, just run the below in the pdb debugger in the terminal.
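One way to do this in Python 3 is pdb's built-in interact command, which drops you into a full interpreter with the current scope (the loop below is just an illustration):

```
(Pdb) interact
*interactive*
>>> for item in ["a", "b", "c"]:
...     print(item)
...
a
b
c
```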
That’s it, and you land in a multi-line Python console inside the debugger.
Python has a great package, collections. In the past I used namedtuple to mock up argparse. In this example I want to share how we can extend the Python dictionary to support a dotted dictionary, i.e. how we can access the value of a key using dict_object.key.
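A minimal sketch of such a dotted dictionary (the class name DotDict and the sample keys are my own, for illustration):

```python
# A minimal "dotted dictionary": extend dict so keys double as attributes.
class DotDict(dict):
    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    __setattr__ = dict.__setitem__
    __delattr__ = dict.__delitem__

conf = DotDict({"host": "localhost", "port": 8080})
print(conf.host)       # -> localhost
conf.debug = True      # attribute writes land in the dict itself
print(conf["debug"])   # -> True
```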
I decided to read about magic method again today. It is always good to revisit and refresh the learning.
They're special methods that you can define to add "magic" to your classes. They're always surrounded by double underscores (e.g. __init__ or __lt__).
What I want to focus on here are the magic methods that get executed when an object is initialized from a class.
We all know the most basic magic method, __init__. It's the way we define the initialization behavior of an object. However, when I call x = SomeClass(), __init__ is not the first thing to get called. It's actually a method called __new__ that creates the instance, then passes any creation arguments on to the initializer. At the other end of the object's lifespan there's __del__. If you want to read more detail, check out Rafe Kettler's guide to Python magic methods: https://rszalski.github.io/magicmethods/
Now, the reason I wanted to talk about the magic methods that get executed every time a class is instantiated is that I wanted to pen down some learnings about the Singleton pattern.
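A minimal Singleton sketch using __new__ (the class name and comments are my own):

```python
# The Singleton pattern, implemented with __new__.
class Singleton:
    _instance = None  # class attribute, shared across all calls

    def __new__(cls, *args, **kwargs):
        if cls._instance is None:
            # First call: actually create the object and cache it.
            cls._instance = super().__new__(cls)
        return cls._instance

a = Singleton()
b = Singleton()
print(a is b)  # -> True: both names point at the same object
```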
In the above code:
__new__ is a class-level method that runs before __init__ (you have control over the object which gets created at this level).
_instance is a class attribute, not an instance attribute. So it's visible/available in the __new__ method before an instance is created.
So the first time, cls._instance is None, so __new__ creates the instance and stores the result in the _instance class attribute of cls (which is your Singleton class).
It is not None the second time, because the reference to the instance has been stored, so it returns the same cls._instance object. In the process, object.__new__ is only called once during the lifetime of the class.
This is the design pattern of the singleton: create once, return the same object every time.
While researching I also came across an SO link that discusses why the Borg pattern is better than the Singleton pattern.
https://stackoverflow.com/q/1318406/9567948. The first answer gives the best and simplest explanation:
If you subclass a borg, the subclass' objects have the same state as their parents classes objects, unless you explicitly override the shared state in that subclass. Each subclass of the singleton pattern has its own state and therefore will produce different objects.
Also in the singleton pattern the objects are actually the same, not just the state (even though the state is the only thing that really matters).
Furthermore, a class basically describes how you can access (read/write) the internal state of your object. In the singleton pattern you can only have a single class, i.e. all your objects give you the same access points to the shared state. This means that if you have to provide an extended API, you need to write a wrapper around the singleton.
In the borg pattern you are able to extend the base "borg" class, and thereby more conveniently extend the API for your taste.
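To make the comparison concrete, here is a minimal Borg sketch (class and attribute names are my own): every instance is a distinct object, but they all share one state dictionary.

```python
# The Borg pattern: distinct objects, one shared state.
class Borg:
    _shared_state = {}

    def __init__(self):
        # Every instance's __dict__ is the same shared dictionary.
        self.__dict__ = self._shared_state

a = Borg()
b = Borg()
a.answer = 42
print(b.answer)  # -> 42: state set on one instance is visible on another
print(a is b)    # -> False: unlike a singleton, the objects differ
```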
Now, if you are with me, let's dig even further into metaclasses.
What are metaclasses?
Metaclasses are the 'stuff' that creates classes.
You define classes in order to create objects, right?
But we learned that Python classes are objects.
Well, metaclasses are what create these objects. They are the classes' classes; you can picture it this way: a metaclass creates a class, and a class creates an instance.
You've seen that type lets you do something like this:
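Presumably something like the three-argument form of type, which creates a class on the fly (the class name and method below are my own illustrations):

```python
# type(name, bases, attributes) builds a class dynamically,
# exactly like a class statement would.
MyClass = type("MyClass", (object,), {"greet": lambda self: "hello"})

obj = MyClass()
print(obj.greet())            # -> hello
print(type(MyClass) is type)  # -> True: type is the metaclass of MyClass
```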
It's because the function type is in fact a metaclass. type is the metaclass Python uses to create all classes behind the scenes.
Now you might wonder why the heck it is written in lowercase, and not Type?
Well, I guess it's a matter of consistency with str, the class that creates string objects, and int, the class that creates integer objects. type is just the class that creates class objects.
You can see that by checking the __class__ attribute. Everything, and I mean everything, is an object in Python.
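For example (the sample values are my own):

```python
age = 35
name = "bob"
print(age.__class__)              # <class 'int'>
print(name.__class__)             # <class 'str'>
# And the class of a class is type:
print(age.__class__.__class__)    # <class 'type'>
print(name.__class__.__class__)   # <class 'type'>
```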