The Top Five Software Development Trends Of 2018 – By Igor Lebedev, CTO of SONM

So far, 2018 has been a transformative year for developers. Software development has outdone itself, with new releases and higher-quality solutions that make computerized tasks easier and more efficient than ever before. Databases can now handle large pools of information without disruption, servers are sold in clusters rather than as single units, and blockchain has opened doors with its decentralized model. These are the top five software development trends taking the technology world by storm this year:

Big Data

The constant growth of corporate and public data leads to a situation where traditional databases and storage tools are no longer capable of managing the volume. The old approaches no longer work; we have seen firsthand that RDBMS (Relational Database Management Systems) can no longer hold everything. This leads to the emergence of new tools and approaches but, more importantly, it marks the end of the dominance of traditional monolithic databases. The new approach is to store data sharded across multiple nodes: the core data still lives in traditional centralized databases, but a growing share of the volume is stored separately, and the share held by monolithic databases shrinks. The challenge of big data in 2018 is that it requires you to rewrite conventional applications so they can handle large data pools.
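To make the idea concrete, here is a minimal sketch (plain Python, with invented node names and record keys) of how an application might shard records across several storage nodes by hashing each key, instead of writing everything into one monolithic database:

```python
import hashlib

# A minimal sketch of hash-based sharding: records are spread across several
# storage nodes instead of a single monolithic database.
# Node names and the record layout are illustrative, not from the article.

NODES = ["node-a", "node-b", "node-c", "node-d"]

def shard_for(key: str) -> str:
    """Pick the node responsible for a given record key."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

def place(records: dict) -> dict:
    """Group records by the node that should hold them."""
    placement = {node: [] for node in NODES}
    for key, value in records.items():
        placement[shard_for(key)].append((key, value))
    return placement

if __name__ == "__main__":
    demo = {f"user:{i}": {"visits": i * 3} for i in range(10)}
    for node, rows in place(demo).items():
        print(node, rows)
```

Because each node only ever sees the keys that hash to it, an application written against a single database has to be restructured before it can work this way.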

Horizontal Scalability

The traditional scaling solution was always to purchase a newer, bigger server. This new server would have more cores, more cache, higher frequencies, larger memory banks, faster buses, and more disks. However, this scaling solution has limitations, and those limitations have already been reached. A common server chassis holds at most two or four CPUs, and you cannot keep adding CPUs any more than you can keep raising frequencies. At some point, vertical scaling reaches its limits. The next step is horizontal scaling: instead of buying a bigger server to replace the old one, you buy one or more additional servers of the same type and add them to your existing pool. This approach is more flexible, but it requires a different software architecture and, again, requires you to rewrite software. If you do, you gain better resource management and the ability to share resources. Here we see microservices, stateless execution and Kubernetes as trends.
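As an illustration, the sketch below (hypothetical names, plain Python) shows the property horizontal scaling relies on: workers are identical and stateless, so adding capacity means appending another worker to the pool rather than replacing a server with a bigger one:

```python
import itertools

# A minimal sketch of horizontal scaling: identical stateless workers behind a
# simple round-robin dispatcher. Names (Worker, dispatch) are illustrative.

class Worker:
    def __init__(self, name: str):
        self.name = name

    def handle(self, request: str) -> str:
        # All state a request needs must arrive with the request (or live in a
        # shared store), so any worker can serve any request.
        return f"{self.name} served {request}"

def dispatch(requests, pool):
    """Spread requests over the pool in round-robin order."""
    for request, worker in zip(requests, itertools.cycle(pool)):
        yield worker.handle(request)

if __name__ == "__main__":
    pool = [Worker("srv-1"), Worker("srv-2")]
    print(list(dispatch(["r1", "r2", "r3"], pool)))
    # Scale out: add a third identical server instead of buying a bigger one.
    pool.append(Worker("srv-3"))
    print(list(dispatch(["r4", "r5", "r6"], pool)))
```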

Decentralization

The changes we see in the world bring new challenges. They may be connected with politics, delivery costs, trust, or market conditions, but the trend remains: companies tend to decentralize their services and software. Content delivery networks deploy servers at your ISP, SaaS vendors open DCs in your country, and businesses think about disaster recovery. As a result, businesses no longer have a single main data centre but two or more, which requires their engineers to rethink parts of their application architecture.
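A simplified sketch of what this can mean for application code is shown below; the endpoints and the send() helper are hypothetical placeholders, assuming a primary and a secondary data centre with failover between them:

```python
# A minimal sketch of a multi-data-centre call path: try the primary DC first
# and fall back to the secondary one. Endpoints and send() are invented
# placeholders, not real vendor or SONM APIs.

DATA_CENTRES = ["https://eu-dc.example.com", "https://us-dc.example.com"]

class DataCentreDown(Exception):
    pass

def send(endpoint: str, payload: dict) -> str:
    """Stand-in for a real network call."""
    # Simulate the primary DC being unreachable while the secondary answers.
    if endpoint.startswith("https://eu-dc"):
        raise DataCentreDown(endpoint)
    return f"accepted by {endpoint}"

def send_with_failover(payload: dict) -> str:
    last_error = None
    for endpoint in DATA_CENTRES:
        try:
            return send(endpoint, payload)
        except DataCentreDown as err:
            last_error = err  # try the next data centre
    raise RuntimeError(f"all data centres unavailable: {last_error}")

if __name__ == "__main__":
    print(send_with_failover({"order": 42}))
```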

Fog Processing

The demand for data processing grows every day. IoT generates more and more data at the edge of the network, and this data is traditionally processed in DCs or in the cloud. However, even though modern optical lines are blazingly fast and network throughput increases year after year, the amount of data grows orders of magnitude faster. Networks are, and always have been, the bottleneck of global information processing, if not technically (bandwidth and latency) then economically (price per transmission). Currently, the fastest and cheapest way to migrate your data warehouse to another DC is to call a cargo vehicle and literally carry the HDDs to the new location. No, it's not a joke.

This is where the industry's idea of edge and fog processing comes from: as much data as possible must be processed locally. Leading IT companies are working on solutions to process and serve data near the devices; this kind of processing is the fog. There are difficulties. You cannot just copy code from the cloud and hope it works, because in the cloud all the data is accessible locally, while in the fog no single node has all the data and must ask the other nodes around it for the information it needs. Application developers have to adapt their architecture and rewrite code so that tasks can be solved in MPP (massively parallel processing) fashion. This gives one more reason to rewrite code, and it makes decentralized IaaS an increasingly attractive platform on which to host a newborn application.
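The sketch below illustrates that MPP-style pattern with invented sensor data: each fog node aggregates its own readings locally, and only the small partial results cross the network to be merged centrally:

```python
from dataclasses import dataclass

# A minimal sketch of local aggregation in the fog: raw readings stay on the
# node that produced them; only tiny summaries travel over the network.
# The readings and node layout are invented for illustration.

@dataclass
class Partial:
    count: int
    total: float

def local_aggregate(readings: list) -> Partial:
    """Runs on the fog node, next to the devices that produced the readings."""
    return Partial(count=len(readings), total=sum(readings))

def merge(partials: list) -> float:
    """Runs centrally; receives a few numbers per node instead of raw data."""
    count = sum(p.count for p in partials)
    total = sum(p.total for p in partials)
    return total / count if count else 0.0

if __name__ == "__main__":
    node_a = local_aggregate([21.5, 22.0, 21.8])  # thousands of readings in practice
    node_b = local_aggregate([19.9, 20.1])
    print("global mean reading:", merge([node_a, node_b]))
```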

Fault Tolerance

We live in an age where it is harder to sell something than it is to produce it. Market competition is rising, and companies running software think about better SLAs to gain an edge and build trust with users. With the points above, the probability of a single system component failing rises dramatically. One can no longer rely on an Oracle database cluster's ability to recover from a crash, because much of the data now lives outside it. Instead of a couple of super-reliable servers with dual power supplies, there may now be 10 or even 100 servers, and the probability that at least one node fails gets close to 1 (one, 100%). Something will probably fail, and businesses have to be prepared for this, again by looking at their application architecture.
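A rough back-of-the-envelope calculation shows why that failure probability climbs so quickly; the per-node failure probability used here is purely an assumed figure for illustration:

```python
# Assuming each node independently fails within some period with probability p,
# the chance that at least one of n nodes fails is 1 - (1 - p)**n.
# p = 0.05 below is an illustrative assumption, not a measured figure.

def p_any_failure(n_nodes: int, p_node: float) -> float:
    return 1.0 - (1.0 - p_node) ** n_nodes

if __name__ == "__main__":
    p = 0.05  # assumed chance that a single server fails during the period
    for n in (2, 10, 100):
        print(f"{n:3d} servers -> P(at least one failure) = {p_any_failure(n, p):.3f}")
```

With these assumptions, two servers give roughly a 10% chance of some failure, while 100 servers push it above 99%, which is why fault tolerance has to live in the application architecture rather than in individual machines.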

The world, and IT in particular, is constantly changing and moving at an incredibly fast pace. Older approaches no longer work. Companies are rewriting their software with big data, horizontal scalability, decentralization and fog processing at the core, and engineers constantly keep fault tolerance in mind on the application side. These software development trends have created new requirements for hardware and its availability: a single node failure becomes less critical, and there are more options for horizontal scaling. As we look to the future, we anticipate a wealth of further developing trends that will, hopefully, keep pace with our evolving technology and the need for greater hardware support.

About Igor Lebedev:

Igor Lebedev is a specialist in computer science with more than 15 years of IT experience. His expertise spans classic system architectures, system classes, databases, fog computing, automated trading systems, smart contracts, and microelectronics. Igor has served as CTO and head of the SONM development team since July 2017. As CTO, he provides insight and valuable contributions to the product vision and the overall development process, while continuously building the team of developers behind SONM, a fog computing project.
