# Changing fonts can save printer ink
# Email existed before the world wide web
# The QWERTY layout was designed to slow you down
# 92 per cent of the world's currency is digital
# Domain name registrations were free till 1995
# In 1956, 5 megabytes (5MB) of data weighed a ton
# In 1936, Russia built a computer that ran on water
A mirror that responds to your presence, a mirror built to improve your daily productivity. The Smart Interactive Mirror helps you stay organized right from the start of the day. While you stand in front of the mirror brushing your teeth or fixing your hair, it detects your presence and shows you the latest news, your schedule for the day and the weather conditions for your area.
The inspiration for building a smart mirror came after reading an article on how a Googler built his own smart mirror using the components available to him. Merging our own ideas with the possibilities a smart mirror offers, we braced ourselves and started work on a prototype that would soon be installed in the maker space of IEEE NIEC.

The work started on Monday morning, when we began jotting down the list of features to be integrated into our smart mirror and then progressed to assembling all the components. As soon as the components were assembled, the next task was to get the UI up and running. The major issue while designing the UI was integrating it into the mirror so that it merges with the colour tone and does not stand out as a distinct display. After the UI was done, the next target was to integrate the different sensors that make the mirror respond to the presence of the user. The first thing you will notice with the smart mirror is how it switches the UI when the user is in close proximity. It took a whole lot of time to get the sensor to work with the mirror, and much more time to calibrate it to detect the proximity of the user.

After this initial setup, it was time to integrate the display of current weather conditions and a news feed for the user. Here the APIs from OpenWeatherMap and The New York Times came to the rescue. The mirror updates the current weather conditions and news feed in real time, so whenever you look into the mirror you see the latest updates.

The next thing to be done was to integrate the sensor which detects whether the door is open or not. This sensor can be installed at the doors of the user's premises and communicates with the mirror through a wireless link. Whenever the door is opened or closed, the door status on the mirror is updated to reflect the same.

The development of the mirror went on till Tuesday evening, which included a night stay at the college to finish up the work. At the end we had something we felt proud to have built, because there is a different kind of fun in building things for real instead of just reading about them. -by Saurabh Bhadhwar
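The article does not include the mirror's code, so here is a minimal Python sketch of the weather/news refresh step it describes. The city name, API-key placeholders and response fields used below are assumptions for illustration, not the project's actual implementation.

```python
# Minimal sketch of the mirror's data-fetch step (assumed endpoints, keys and city).
import requests

OWM_KEY = "your-openweathermap-api-key"   # placeholder, not a real key
NYT_KEY = "your-nytimes-api-key"          # placeholder, not a real key


def current_weather(city="Delhi"):
    """Fetch current conditions from the OpenWeatherMap current-weather API."""
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "appid": OWM_KEY, "units": "metric"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return f'{data["weather"][0]["description"]}, {data["main"]["temp"]} °C'


def top_headlines(limit=5):
    """Fetch a few headlines from the New York Times Top Stories API."""
    resp = requests.get(
        "https://api.nytimes.com/svc/topstories/v2/home.json",
        params={"api-key": NYT_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return [item["title"] for item in resp.json()["results"][:limit]]


if __name__ == "__main__":
    print("Weather:", current_weather())
    for title in top_headlines():
        print("News:", title)
```

In the actual mirror, a loop like this would run periodically and push the results to the on-screen UI.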
This well-known project inspires everyone to take part in the field. The quadcopter consists of brushless DC motors, propellers, ESCs (Electronic Speed Controllers), a 5000 mAh Li-Po battery, a CC3D flight controller, a frame, a transmitter-receiver pair and, of course, an attention-grabbing look. With the CC3D it is stable beyond the team's expectations, and a flight time of more than 35 minutes makes it dependable. The total weight lies within 1.4-1.6 kg, and the transmitter-receiver range is about 1 km. OpenPilot, the open-source ground station software, is used to calibrate the components at ground level. For this project, the sky really is the limit.
Day by day, technology is becoming an increasingly important part of our everyday life. Using new technology, everything at home can be automatically controlled and integrated, so automating "things" in an ecosystem is becoming much more popular. Among the various assistants available, Google Home stands out with a tremendous advantage over the others: it is more knowledgeable, can remember the previous question, has better physical controls and can be tailored to specific applications.
Artificial Intelligence and Machine Learning
Artificial Intelligence is, broadly, intelligence exhibited by machines. It is also described as a technology that enables machines to converse with humans in natural language, creating a new form of life. It can be thought of as the broader concept of machines being able to carry out tasks in a way that we would consider "smart". Machine Learning, which is a part of AI, may be defined as the ability of a computer program to learn from its own mistakes and produce better results over time.
Let's Integrate!

Google Home as a tool for an AI engine: using an AI engine such as the one behind Google Home lets me integrate the IoT environment so that 'things' can actually communicate with one another and stay synchronized. Personal assistants are ushering in the age of AI at home. Machine Learning is a current application of AI based around the idea that we should just be able to give machines access to data and let them learn for themselves, and communicate thereafter.

Implementation

Actions on Google provides a conversational UX platform for products and services, API.AI, with which I developed my own agents, the intents within those agents, and their behaviour as entities, and it learned them within minutes. This easy, GUI-based machine learning acquired by Google is impressive in itself: I just have to utter some of the taught words together and it is able to understand and make up sentences accordingly. API.AI comes with its own web simulator to provide a remarkable experience, and after deploying the agent I created on the hardware (the Google Home device), we get enhanced user interaction. I would describe Google Home as an intellectual but mechanical brain in the room which takes action when you say a phrase after "Okay Google!".

Making agents is great fun. So far I have made my Google Home talk to me about specific tasks and offer the most relevant choice it can find from a plethora of options. By specific I mean tied to a particular room, say the kitchen. I have made an agent named "Personal Chef" who tells me the recipe of a dish I could make with the ingredients I already have; "Let me talk to Personal Chef" is the phrase I defined to invoke that agent. Similarly, for the living room I made an agent named "smartHome" which controls the switching of lights and monitors the temperature and moisture inside the room; "Let me contact smartHome" invokes it. These invocations can be multiple too.

Having already created a centralized control and monitoring unit for home appliances on a real-time basis using Google Home, accessible from any remote corner of the world, obtaining every sensor's real-time reading in no time using publish/subscribe IoT protocols is not a big issue. The advantage of a pub/sub protocol such as MQTT is its low payload, customized for wireless sensor networks with minimal data overhead.

Workflow

The system uses two-way communication. As a client, I put my queries to Google Home, which makes a request to the server listening on the agreed port, passing through a broker in between, the heart of any publish/subscribe protocol. The server then acts accordingly (say, turns a light on wirelessly) and acknowledges Google Home with the phrase to speak as part of the interaction. I control the devices via a NodeMCU ESP8266 module, which receives commands from the Google Home device indirectly via the cloud so that everything can be accessed from anywhere; this makes it a full-fledged integration of my smart IoT ecosystem with Google Home. A minimal sketch of this publish/subscribe step appears at the end of this article.

Aspects of a promising future

An integrated IoT ecosystem has been presented, amalgamating a wireless sensor network and a voice-automated AI engine.
Sensors and things like doors, windows and appliances were made interactive so that they can report their present and past status to a web server; the sensor values are conveyed to a web page in real time using WebSockets. Using his or her voice, a consumer can not only automate this system but also make it learn from its mistakes using Machine Learning algorithms.
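As promised above, here is a minimal sketch of the publish/subscribe step on the controlling side. The broker address and topic names are assumptions, and it uses the paho-mqtt 1.x style client API; the real project's webhook and NodeMCU firmware are not shown in the article.

```python
# Minimal sketch of the MQTT publish/subscribe workflow described above.
# Broker host and topic names are placeholders; paho-mqtt 1.x callback style.
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"                  # hypothetical broker host
COMMAND_TOPIC = "home/livingroom/light"        # assumed topic the NodeMCU listens on
STATUS_TOPIC = "home/livingroom/light/status"  # assumed topic for acknowledgements


def on_connect(client, userdata, flags, rc):
    # Subscribe to status updates acknowledged by the NodeMCU.
    client.subscribe(STATUS_TOPIC)


def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")


client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)

# The voice-assistant fulfillment would publish something like this after
# "Okay Google, let me contact smartHome... turn the light on":
client.publish(COMMAND_TOPIC, "ON")

client.loop_forever()
```

The low-overhead pub/sub pattern is what lets the Google Home fulfillment, the web dashboard and the NodeMCU all stay in sync through the same broker.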
-Himanshu Sahdev
There are times, when we are starting our careers, when we want to learn, explore and, more importantly, understand how things work. We start building small projects to kick-start our thinking, and eventually we bump into the realization that what we are building is still not global: only our friends and a limited number of people know about us and our work, and we lack the experience of working on real-life projects that are being built by a large number of people around the world. This is where open source comes in. The spirit of open source lies in the fact that it provides freedom and allows a person to contribute according to their will and in the area of their interest. The knowledge and community support you get while contributing to open source are immense. There is a lot to learn, a lot to explore and infinite possibilities.
Why Open Source?

During recent years, open source projects have gained a lot of traction; many projects are being used by large corporations and individuals alike to realize their products. One of the biggest open source projects, the Linux kernel, involves thousands of contributors from all over the world and is used in a wide range of areas, from powering the software stack of supercomputers to running on the SoCs in our handheld devices. Currently, there is an open source alternative to nearly every famous proprietary software we use: GIMP as an alternative to Photoshop, LibreOffice in place of Microsoft Office, and projects like Wine, which gives us the ability to run Windows software on Linux.

How does the process work?

Open source projects work in a way that the people building them put their source and ideas into the open, giving everyone the chance to tinker with and use their projects. Usually this source is available through source tarballs or version control systems. A person who is interested in working on a project clones its source, makes modifications and uses the project. If the person feels that their modifications can benefit a larger audience, they can send their code to the upstream project, where, after it gets merged, it will be available to every user of the project. Taking a real-life example, consider what happens in Android. The Android project uses the Linux kernel as the core of the OS. At every Android release cycle, the development team forks the kernel source from the mainline Linux kernel release, integrates its patches and builds the kernel for Android. In return, they frequently submit patches to the mainline kernel fixing bugs or adding architecture support. This is how the cycle goes on.

What's in it for you?

As a contributor you gain a number of benefits from working on an open source project of your choice. Firstly, you get a chance to enhance your knowledge and skills with complete freedom, i.e., you contribute when you can. Secondly, behind every open source project is a great community which supports that project, continuously working to improve it and to onboard new contributors. The experience of working with these communities is valuable in itself: you get to interact with many different people in online meetings, workshops and community events. Not only does this help you make your voice heard, it also helps you develop your soft skills. Thirdly, when you contribute to an open source project as a newbie, your contributions are reviewed by the community, which suggests improvements and gives you tips on how to do better, and this helps you a lot in improving your work.

Is there any recognition?

Now, you might be wondering whether, as a contributor, your work will get recognized or get lost in the crowd. When you contribute to a project, you directly affect the way people use it. Many big and medium-sized open source projects now have a recognition system that awards badges to contributors who have made a significant number of contributions. Beyond this, no one knows where your contributions may lead; one day you might be leading a team working on that same project.

Where to start?
The first step in becoming a contributor to an open source project is to research the project. Visit its website and try to figure out what you can help with. The next step is to reach out to the community through the project's communication channels, such as IRC or mailing lists, where you should introduce yourself, mention your areas of interest and say where you would like to contribute. Working on something but got stuck? No issues: head over to the project IRC and discuss your issue, and if that doesn't resolve it, drop a mail to the project mailing list. But remember one thing: be calm and polite to people. They are voluntarily giving their time to the project, and no one likes being shouted at.
SMART ROADS

Necessity is the mother of invention, and advancing technologies spark new ideas and thinking in budding engineers. If we observe our vicinity closely, traffic jams have always been a problem for people in India, while Google has been developing the concept of smart cars. This urged us to think about traffic management, and the result is a project we ultimately named "Smart Roads".
The project is based on a microcontroller (ours is an ATmega16) programmed in embedded C. Features of our project:
1. Synchronization of traffic lights is achieved for two-way traffic; the lights are electronically operated.
2. LFRs (line-follower robots) are used as cars and are made intelligent using colour sensors.
3. A traffic light switches to the next one if it observes that no vehicle has passed through it for a specific interval of time, without exceeding its maximum time limit (a sketch of this switching logic follows the list).
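The actual project runs embedded C on the ATmega16; the Python sketch below only illustrates the switching rule in feature 3. The timing constants and the vehicle_detected() sensor hook are assumptions for illustration, not values from the project.

```python
# Illustrative sketch of the light-switching rule: move on early if the road
# has been idle, but never hold green beyond the maximum time limit.
import time

MAX_GREEN = 30.0   # maximum green time per direction, seconds (assumed)
IDLE_SKIP = 5.0    # switch early if no vehicle seen for this long (assumed)


def vehicle_detected(direction):
    """Placeholder for the colour-sensor check used in the real project."""
    return False


def run_signal(directions=("north-south", "east-west")):
    while True:
        for direction in directions:
            print(f"GREEN: {direction}")
            green_start = last_vehicle = time.time()
            while True:
                now = time.time()
                if vehicle_detected(direction):
                    last_vehicle = now
                # Feature 3: skip ahead on an idle road, cap the green phase.
                if now - last_vehicle >= IDLE_SKIP or now - green_start >= MAX_GREEN:
                    break
                time.sleep(0.1)


if __name__ == "__main__":
    run_signal()
```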
Human beings breathe, eat, drink, feel, text, call, mail, connect and GOOGLE to survive. Technology has become an indispensable part of our lives and, irrespective of one's profession, it has continued to amaze and support us in different spheres. It has always given us timely comfort and convenience when we need it the most. And that is exactly why an end-to-end solution such as IoT has been one of the most popular developments we have ever had.
What actually is IoT?

IoT, an acronym for 'Internet of Things' (some would call it the 'Internet of Everything'), is nothing but an ecosystem of DNA [Device, Network, Application] wherein various devices interact with each other, creating a network of their own. This network may connect with more networks, extending its reach in order to exchange meaningful information and, if desired, to analyse that data. The term 'IoT', which has become one of the most trending topics in the tech world today, came with umpteen misconceptions. The first and foremost misconception among the naive is that it is a technology. On the contrary, it is a concept or a model that existed long before [1999] we started realizing that it resides in the space we live in. Take a walk through your daily life and you will be surprised how many of the devices we already use are an integral part of IoT. It starts with the space we live in: the smartphones that we use, the internet that we surf, and the applications providing a service, such as the ones offered by OLA, Uber et al.

Everything in a network

A network may be defined as a group of devices with unique identities, with the help of which a device sends or receives data on the network. Take our houses for instance: every house has a different address on a different street. This addressing system makes each of our houses unique, and the postal address of a home helps others locate it; devices with assigned addresses work the same way. A Wireless Sensor Network is an obvious yet inseparable part of IoT, serving as the eyes and ears of a network. Sensors may be connected to the end devices; these sensor networks collect data from the surroundings, measuring physical parameters such as heat, motion, sound, light and what not, and deliver it to a particular destination in the network. One can easily monitor and control those devices not just from their vicinity but from any remote corner of the world. Amazed, huh?! And that is just one aspect of IoT; it has more in store for the world.

Integrating anything with everything

Integration of anything with everything remains one of the most fascinating aspects of IoT. With all the imagination there is, just think of the craziest integrations and the chances are that it has already been done. From a small coin to automated machines, from the windows of our homes to electrical outlets and appliances, one can bridge the gap between the physical and digital world. IoT can be contemplated as a neural network in the cloud connecting billions of devices, and that is just for starters. A dimension with endless possibilities is nothing less than a revolution in this technological era, evolved from its predecessor: machine-to-machine communication. Market estimates suggest more than a trillion devices will be connected to the internet by the year 2020.

A smart way ahead

Sit on the couch and relax. Wait!! Maybe the couch isn't comfortable handling your weight, and so it conveys its plight to the furniture around it, or to you, saying that you weigh more than 80 kilograms. Ever imagined your furniture being that interactive?! IoT is eventually a recipe with many tastes and tongues. Most of the tech giants, such as Microsoft, Google, Cisco et al., know the significance and power of connecting, and so they have already started investing in this area.
IoT is here to stay, not just because people have realized its importance but because they are now aware that it makes our surroundings smarter and more comforting than ever before.
Dedicated hosting is a web hosting service in which a server is allocated entirely to a single client or company. A dedicated server hosting plan is offered by a hosting firm and caters to the varied requirements of that client or company.
For instance, there are umpteen companies with different appetites that intend to choose their own set of specifications, such as memory and bandwidth, in order to ensure the smooth functioning of their websites. That is exactly where dedicated hosting makes its mark. A dedicated hosting infrastructure is provided to the client, preventing a website from getting overwhelmed by the network traffic caused by services provided to other clients, which is an impediment and a drawback of shared hosting. This is why clients of dedicated hosting services do not need to worry about a faulty server caused by denied access to resources.
A much-hyped dedicated server is rumoured to consist of various machines wired and working together. To the disappointment of those lacking information, a server simply runs on a computer that manages the resources provided to a client; that may include setting up the server, patching it and providing firewall services. Dedicated hosting offers far more resources and capabilities than a shared hosting service ever can. It is much like living in a house with all the required facilities, such as electricity and water supply, and butlers catering to the owner's demands; the additional luxury of partying around would be the icing on the cake.

Dedicated hosting is further categorized into:
1. Managed dedicated hosting
2. Unmanaged dedicated hosting

Managed dedicated hosting: the web host provides a server hosting package that leaves all the administration work to the web host, partially or completely, as per the terms and conditions put forth by the host. All the glitches involving security, the control panel or the operating system of the server are resolved by the hosting provider. With ever-growing industrialization, it is not necessary for companies to set up and monitor their servers on their own, because doing so proves costly: trained technicians and engineers are needed for the job.

Unmanaged dedicated hosting: the client itself is responsible for the maintenance of the server and for the glitches involved in its operation, although the web host remains accountable for the hardware setup. This type of hosting proves to be cost-effective, given that the client possesses trained staff; the help from the web host is limited in this case. So an in-depth analysis is needed before opting for either of the aforementioned services.

Dedicated web hosting is rapidly gaining popularity among companies that are directly or indirectly associated with e-commerce and financial transactions, mainly because of the security and reliability it provides to the client. Amid all the popularity it has gained, there are a lot of myths that revolve around dedicated hosting. Some of them are:
1. Web hosting providers use the same types of servers. Web hosting is a booming field of technology, and thousands of providers work under its umbrella. It is a mere misconception that all of them use the same servers. Providers who claim to be more efficient may deploy better hardware and software setups at a higher cost, while the same quality of service may not be available under a cheaper package built on an inferior server setup.

2. A well-established hosting provider serves with superior quality. In any organization, as the clients grow in number, the support the hosting company offers can degrade in quality. It might have a large facility to meet the increasing demands of its clients, yet there is still a real possibility that it does not provide continuous support, and in these competitive times a company cannot afford to lose out to degraded quality of service.

3. A cost-efficient web hosting service is better in the longer term. Being economical in one's decisions and needs is only fair as far as a growing business is concerned, and so many entrepreneurs today want to go for a cost-effective hosting solution. In many cases it might be the right thing to do, but ideally a client should first analyse the services a company provides; for example, feedback and forums on the internet can easily give an idea of the quality of service offered.

4. Managed web hosting is a secure method. One of the biggest myths of dedicated web hosting is that clients are easily reassured by the security measures usually claimed by the service providers. Several steps are indeed taken by providers against threats such as malware, DDoS attacks et al., but no service is 100 percent secure.

Keeping in mind all aspects of a client's requirements, one should analyse the different dimensions of a technology so as to get efficient and reliable services.
Introduction
Ever wondered how you can peep into your friend's directory and exchange data without even touching their system? Come aboard the ship of FTP [File Transfer Protocol] and you will see how easily one can transfer files to and from a system sitting in a far-off place. FTP is a standard network protocol whose specification was written by Abhay Bhushan and published on 16 April 1971.
Description:
File Transfer Protocol is a set of instructions that defines the way data is transferred over a TCP/IP network. Before getting into any of the recipes of FTP, one should understand some basic networking jargon, such as the client-server model, which stands as the backbone of networking.

Client-Server Model: a networking model that elucidates the requests and responses between systems. It is akin to an individual requesting information [the client] from another individual holding the information [the server]. The client sets up a connection with the server first, the server responds if it accepts the request, and the data requested by the client is then provided by the server. FTP falls under this model; its main motive is to allow access to files present on a remote system.

Transmission in FTP takes place over channels. A channel is basically a path through which information can be transferred from one place to another, and there are two types:
1. Command channel – all the commands involved in FTP operations use this channel; the server and client issue their respective commands through it.
2. Data channel – this channel is meant specifically for data transmission; all the files transferred from one system to another use it.

The server and the client each run two types of processes concurrently:
1. DTP – the Data Transfer Process, the set of activities responsible for establishing and managing the connection; the DTP of the server interacts with the DTP of the client.
2. PI – the Protocol Interpreter, which manages the control actions on both the server and the client, although the rules and implementations differ between the two. The user on the client side has a GUI [Graphical User Interface] for interacting with the machine, which ultimately communicates with the server.

Before any type of communication between two or more systems, one thing is common to all: initiating a connection. The FTP client sets up and initiates a connection to the FTP server, then sends commands to the server, which analyses them and sends back a response. It is much like a conversation between people:
Person 1: Hello! Are you there?
Person 2: Yes, I am right here. How are you?
On establishing the connection between the server and the client, the server PI gives back the port address on which the data transmission will be carried out. The client then, just like a good listener, listens on that particular port.

One of the most amazing things about FTP is that it not only allows file transfer between a server and a client, but also between two servers: the request to transfer files between two servers is relayed through a third machine acting as the client. With the help of FTP commands one can specify the ports and operations to be used. The three types of FTP commands involved are:
1. Access control commands.
2. Commands for the action to be performed [retrieve, store, delete].
3. FTP service commands [to abort a transfer, append, etc.].

There is also software built for the task, such as FileZilla for Windows; Windows users can quickly connect to a server with the help of a port number and an IP address.
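To make the command/data channel idea concrete, here is a minimal sketch of an FTP session using Python's standard ftplib; the host, credentials and file name below are placeholders, not values from the article.

```python
# Minimal FTP session sketch: commands travel over the control channel,
# the file contents travel over a separate data channel.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:               # opens the command channel (port 21)
    ftp.login(user="anonymous", passwd="guest@example.com")
    ftp.cwd("/pub")                               # access-control / navigation command
    print(ftp.nlst())                             # list remote directory contents

    # Retrieve a file: the RETR command goes over the command channel,
    # the bytes arrive over the data channel and are written locally.
    with open("readme.txt", "wb") as local_file:
        ftp.retrbinary("RETR readme.txt", local_file.write)
```

Graphical clients like FileZilla do exactly this under the hood, just with a point-and-click interface.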
FTP is a solution to many problems, especially the ones where you want to access the files of a remote system that runs on a different operating system. Now that’s something!! Isn’t it?