
Featured

Why solar isn’t soaring

Given the current power challenges South Africans face, it makes sense for many to make use of rooftop solar panels. However, uptake has been slow due to installation costs, long payback periods and problems linking panels into the existing electricity grid, write KEVIN NORRIS and DAVE SMITH of the Jasco Group.

Given the current power challenges in South Africa, as well as a growing trend toward sustainable electricity solutions, solar technology has become a hot topic as a source of energy supply, particularly for organisations wishing to reduce their reliance on utility power. Rooftop solar photovoltaic (PV) plants can help organisations generate their own power, and grid tie inverter systems enable them to feed excess generated power back to the utility for use elsewhere. However, despite the benefits of such systems, two common challenges have emerged. Firstly, PV plants are a costly investment, and the Return On Investment (ROI) has in the past taken many years to realise (although this is changing as installation costs fall and electricity tariffs continue to increase), which makes obtaining funding for such systems difficult. Secondly, several issues remain with the connection of solar plants to the main grid, which has slowed the uptake of these solutions. Addressing these challenges is key to harnessing the power of the sun as an alternative, sustainable energy source.

Grid tie solar systems are the simplest and most cost-effective way to use solar energy for day-to-day power requirements. At a basic level, the grid tie inverter converts the direct current (DC) power generated by solar panels into alternating current (AC) and injects this AC power into the existing load. Any excess energy is then fed into the power distribution network. The inverter also ensures that energy requirements are met from available solar power first, drawing on the utility supply only when there is a solar shortfall. The system does not necessarily require a battery for energy storage (although one extends its functionality), so installation is simple and efficient, and maintenance is low. However, while the cost of manufacturing solar PV panels and grid tie inverters has fallen over the past few years as a result of increased demand, greater economies of scale and technological advancement, solar remains a costly solution to implement. The high cost of raw materials and the high-tech conditions required to manufacture components keep these solutions out of reach of the average homeowner or business.
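The priority logic described above (serve the load from solar first, draw from the utility only on a shortfall, and export any excess) can be sketched as follows. This is a minimal illustration; the function name and units are assumptions, not part of any real inverter interface.

```python
def dispatch(solar_kw: float, load_kw: float) -> dict:
    """Illustrative grid tie dispatch: solar serves the load first,
    the utility covers any shortfall, and surplus solar is exported."""
    solar_to_load = min(solar_kw, load_kw)
    from_utility = load_kw - solar_to_load   # shortfall covered by the grid
    to_grid = solar_kw - solar_to_load       # excess fed into the network
    return {"solar_to_load": solar_to_load,
            "from_utility": from_utility,
            "to_grid": to_grid}

# Midday surplus: 5 kW of solar against a 3 kW load exports 2 kW.
print(dispatch(5.0, 3.0))  # {'solar_to_load': 3.0, 'from_utility': 0.0, 'to_grid': 2.0}
```

The same function shows the evening case: with 1 kW of solar and a 3 kW load, 2 kW is drawn from the utility.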

Justifying this investment is often one of the biggest obstacles to implementing solar power solutions, and obtaining loans and funding is typically a difficult sell. The investment typically pays for itself only after six to 10 years, and the rate of return depends on a number of factors, including the type of installation and the existing tariff with the utility. What needs to be kept in mind, however, is that solar PV systems have a predictable performance curve of 25 years and a usable life of 35 years. In addition, with a grid tie inverter system, homeowners and businesses will one day be able to feed excess power back to the grid, either offsetting it against utilisation costs or selling it to the utility provider. PV systems should therefore not be seen as a depreciating asset. They are in fact an asset that not only reduces current costs, but in the long run could be a significant income generator for the owner.

Quantifying this value is a relatively simple mathematical exercise with the assistance of financial models. In 2015, the average cost of electricity per kilowatt-hour (kWh) is similar to the Lifecycle Levelised Cost of Energy (LLCE) of a typical grid tie system, at around R1.00 per kWh. This means that, calculated over the complete guaranteed performance lifespan of the panels (approximately 25 years), the cost per kWh from a solar PV system will be similar to the 2015 municipal cost. Going forward, the cost of electricity from the utility is very likely to increase significantly year on year, while the cost of the installed PV system will remain at its installed price plus minimal maintenance costs. Over the next 10 years, the cost of solar generation would remain around R1.00 per kWh, while the utility cost is forecast to rise as high as R3.50 per kWh.
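A back-of-envelope sketch of this comparison, using the figures above: a fixed solar LLCE of R1.00 per kWh against a utility tariff that climbs from R1.00 to R3.50 per kWh over 10 years. The roughly 13% annual escalation rate is inferred from those two endpoints, not stated in the article.

```python
solar_llce = 1.00  # R/kWh, effectively fixed over the panel lifetime

def utility_tariff(year: int, base: float = 1.00,
                   final: float = 3.50, horizon: int = 10) -> float:
    """Utility tariff assuming constant compound escalation from
    `base` to `final` over `horizon` years (rate inferred, ~13.3%/yr)."""
    growth = (final / base) ** (1 / horizon) - 1
    return base * (1 + growth) ** year

for year in (0, 5, 10):
    print(f"year {year}: utility R{utility_tariff(year):.2f}/kWh "
          f"vs solar R{solar_llce:.2f}/kWh")
```

By year 10 the sketch reproduces the article's R3.50 forecast, with the solar cost unchanged; the gap only widens over the remaining panel lifespan.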

This same trend is likely to continue over the lifespan of the solar PV system. Projecting these increases over the 25-year period, the cost difference between now and then would be significant. Effectively, within this period the solar PV solution could still be generating electricity at R1.00 per kWh, whereas by that stage the cost of utility power will doubtless have increased many times over. It is this future gap between utility costs and the fixed solar PV cost that should be recognised as part of the long-term sustainability of owning such an asset. Additionally, in most cases the asset is attached to a building and would improve the building’s valuation. Not only does this have a positive financial implication, it also has an environmental one, especially when one considers the Carbon Tax to be levied as of 2016. The only ways to negate the carbon tax are to recycle or to produce “Green kWh” from a renewable source such as solar PV.

In order to drive adoption of solar PV solutions, financial institutions need to recognise their value and assist businesses and homeowners with funding these systems. Forward-thinking financial institutions should look to secure a loan for solar PV power against the asset itself, as it will pay for itself many times over in years to come. The asset could also be recognised as part of the building itself and financed through an extension of the building bond. In addition, government needs to come on board by assisting financial institutions with tax rebates for their efforts in financing solar PV systems. This is sound strategy: by funding these systems, financial institutions contribute to the overall reduction in carbon output and, more importantly, help to resolve the country’s current energy shortages.

In addition to funding, connecting to the utility remains a challenge. One of the most pressing issues with pure solar solutions (those without energy storage) is that they can only produce energy during daylight hours, and that energy must be used or dumped. In the majority of residential applications, where nobody is at home during the day, this generated power will be wasted unless it can be fed back into the grid. Connection codes therefore need to be finalised, and metering for two-way energy flow needs to be implemented. It is also important to find ways of optimising the use of all renewable energy generated, to the advantage of both end-users and utility providers.

The concept of net metering, whereby users sell their excess renewable energy back to the utility for credit and draw on these credits when the renewable source experiences a shortfall (such as at night, when there is no sun to power solar PV systems), has great potential to benefit all parties concerned. For most residential applications, this form of energy trading works well. Some utilities may limit the amount of energy that can be sold back for credits to the amount of utility energy used (i.e. if you use 2,000 kWh per month, then you may only sell back a maximum of 2,000 kWh per month). Another approach is to annualise this amount, enabling owners to make better use of the credits throughout the year, such as in winter, when generation may not match overall consumption.
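The capped and annualised credit schemes described above can be sketched as follows; the 2,000 kWh figure comes from the article, while all function names and other numbers are hypothetical.

```python
def monthly_credit(exported_kwh: float, consumed_kwh: float) -> float:
    """Monthly net-metering cap: credit only up to the amount of
    utility energy consumed in the same month."""
    return min(exported_kwh, consumed_kwh)

def annual_credit(exports: list, consumption: list) -> float:
    """Annualised alternative: the cap is applied to yearly totals,
    so summer surplus can offset winter shortfall."""
    return min(sum(exports), sum(consumption))

# A sunny month: 2,500 kWh exported but only 2,000 kWh consumed from
# the utility, so only 2,000 kWh earns credit under the monthly cap.
print(monthly_credit(2500, 2000))  # 2000
```

Under the annualised scheme, the 500 kWh that the monthly cap discards could instead be carried forward against months where consumption exceeds generation.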

Theoretically, users could manage consumption and generation of energy to a zero balance and not have to spend a cent on energy from the utility for the year. In principle this idea is appealing, particularly for consumers and businesses; for utilities, however, it could cause problems. If renewable energy customers are no longer paying what they used to pay for electricity, but rather supplementing their own power generation with utility power, how does the utility find the revenue to pay for maintaining the generation, transmission and distribution network the entire system uses? Feed-in tariffs have been suggested as one solution to this problem, whereby the utility purchases the excess energy from providers while users still purchase utility power, with no obligation to consume at the same rate as you sell energy.
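A minimal sketch of the feed-in-tariff settlement just described: the utility buys all exported energy at one rate while the customer buys utility energy at another, with no requirement that the two quantities balance. Both rates here are hypothetical, chosen so the export rate sits below the retail rate, which is what preserves the utility's network revenue.

```python
def feed_in_bill(imported_kwh: float, exported_kwh: float,
                 retail_rate: float = 1.50, feed_in_rate: float = 0.80) -> float:
    """Net amount owed in rand (negative means the utility owes the customer).
    Imports are billed at the retail rate; exports are credited at the
    lower feed-in rate, independently of consumption."""
    return imported_kwh * retail_rate - exported_kwh * feed_in_rate

# Import 600 kWh, export 400 kWh: pay R900 for imports, earn R320 back.
print(feed_in_bill(600, 400))  # 580.0
```

Because every imported kWh still pays the full retail rate, the utility retains a contribution toward grid maintenance even from customers who export heavily.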

Regardless of the challenges involved, solar PV remains the most viable and cost-effective alternative energy source for South Africa, a country that enjoys significant hours of sunshine for much of the year in most of its regions. If these problems can be satisfactorily resolved and solar becomes a mainstream power generation source, not just for the utility but for business and homeowners too, South Africa’s currently bleak power prospects may have a brighter future after all.

* Kevin Norris, Consulting Solutions Architect, Renewable Energy, and Dave Smith, Managing Director, Renewable Energy, The Jasco Group

Featured

What’s left after the machines take over?

KIERAN FROST, research manager for software in sub-Saharan Africa at International Data Corporation, assesses AI’s impact on the workforce.

One of the questions that we at the International Data Corporation are asked is what impact technologies like Artificial Intelligence (AI) will have on jobs. Where are there likely to be job opportunities in the future? Which jobs (or job functions) are most ripe for automation? What sectors are likely to be impacted first? The problem with these questions is that they misunderstand the size of the barriers in the way of system-wide automation: the question isn’t only about what’s technically feasible. It’s just as much a question of what’s legally, ethically, financially and politically possible.

That said, there are some guidelines that can be put in place. An obvious career path exists in being on the ‘other side of the code’, as it were – being the one who writes the code, who trains the machine, who cleans the data. But no serious commentator can leave the discussion there – too many people simply lack either the ability or the desire to code. Put another way: where do the legal, financial, ethical, political and technical constraints on AI leave the most opportunity?

Firstly, AI (driven by machine learning techniques) is getting better at accomplishing a whole range of things – from recognising (and even creating) images, to processing and communicating natural language, completing forms and automating processes, fighting parking tickets, being better than the best Dota 2 players in the world and aiding in diagnosing diseases. Machines are exceptionally good at completing tasks in a repeatable manner, given enough data and/or enough training. Adding more tasks to the process, or attempting system-wide automation, requires more data and more training. This creates two constraints on the ability of machines to perform work:

  1. machine learning requires large amounts of (quality) data; and
  2. training machines requires a lot of time and effort (and therefore cost).

Let’s look at each of these in turn – and we’ll discuss how other considerations come into play along the way.

Speaking in the broadest possible terms, machines require large amounts of data to be trained to a level to meet or exceed human performance in a given task. This data enables the bot to learn how best to perform that task. Essentially, the data pool determines the output.

However, there are certain job categories which require knowledge of, and then subversion of, the data set – jobs where producing the same ‘best’ outcome would not be optimal. In particular, these are jobs typically referred to as creative pursuits – design, brand, look and feel. To use a simple example: if, pre-Apple, we had trained a machine to design a computer, we would not have arrived at the iMac, and the look and feel of iOS would not have become the predominant mobile interface.

This is not to say that machines cannot create things. We’ve recently seen several ML-trained machines on the internet that produce pictures of people (that don’t exist) – that is undoubtedly creation (of a particularly unnerving variety). The same is true of the AI that can produce music. But those models are trained to produce more of what we recognise as good. Because art is not a science, a machine would likely have no better chance of producing a masterpiece than a human. And true innovation, in many instances, requires subverting the data set, not conforming to it.

Secondly, and perhaps more importantly, training AI requires time and money. Some actions are simply too expensive to automate. These tasks are either incredibly specialised, and therefore do not have enough data to support the development of a model, or very broad, which would require so much data that it would render training the machine economically unviable. Other challenges may also arise. At the IDC, we refer to the Scope of AI-Based Automation. In this scope:

  • A task is the smallest possible unit of work performed on behalf of an activity.
  • An activity is a collection of related tasks to be completed to achieve the objective.
  • A process is a series of related activities that produce a specific output.
  • A system (or an ecosystem) is a set of connected processes.
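One way to model this scope is as nested data structures. The class names below mirror IDC's definitions above; the medical example process and the per-task automation flag are purely illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """Smallest unit of work performed on behalf of an activity."""
    name: str
    automatable: bool = False

@dataclass
class Activity:
    """A collection of related tasks completed to achieve an objective."""
    name: str
    tasks: list = field(default_factory=list)

@dataclass
class Process:
    """A series of related activities producing a specific output."""
    name: str
    activities: list = field(default_factory=list)

    def automation_share(self) -> float:
        """Fraction of tasks across the whole process marked automatable."""
        tasks = [t for a in self.activities for t in a.tasks]
        return sum(t.automatable for t in tasks) / len(tasks) if tasks else 0.0

# Hypothetical example: image interpretation is automatable, but the
# final diagnosis remains a human task, so the process is only half automated.
diagnosis = Process("diagnosis", [
    Activity("imaging", [Task("interpret scan", automatable=True)]),
    Activity("consultation", [Task("deliver diagnosis")]),
])
print(diagnosis.automation_share())  # 0.5
```

A system would simply be a collection of such processes; the point of the model is that automating one task says little about the share of the process, let alone the system, that can be automated.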

As we move up the stack from task to system, we find different obstacles. Let’s use the medical industry as an example of how these constraints interact. Medical image interpretation bots, powered by neural networks, exhibit exceptionally high levels of accuracy in interpreting medical images. This output is used to inform decisions that are ultimately made by a human – an outcome dictated by regulation. Even if we removed the regulation, those machines could not automate the entire process of treating the patient. Activity reminders (such as when a patient should return for a check-up, or reminders to follow a drug schedule) can in part be automated, with ML applications checking a patient’s past adherence patterns, but with ultimate decision-making left to a doctor. Diagnosis and treatment form a process that is ultimately still the purview of humans. Doctors are expected to synthesise information from a variety of sources – from image interpretation machines to the patient’s adherence to the drug schedule – in order to deliver a diagnosis. This relationship is not only a result of a technicality: there are ethical, legal and trust reasons that dictate this outcome.

There is also an economic reason that dictates this outcome. The investment required to train a bot to synthesise all the data needed for proper diagnosis and treatment is considerable. At the other end of the spectrum, when a patient’s circumstances require a largely new, highly specialised or experimental surgery, a bot is unlikely to have the data required to be sufficiently trained to perform the operation, and even then it would certainly require human oversight.

The economic point is a particularly important one. To automate the activity in a mine, for example, would require massive investment into what would conceivably be an army of robots. While this may be technically feasible, the costs of such automation likely outweigh the benefits, with replacement costs of robots running into the billions. As such, these jobs are unlikely to disappear in the medium term.

Thus, based on technical feasibility alone, our medium-term jobs market seems to hold opportunity in the following areas: the hyper-specialised (for whom not enough data exists to automate), the jack-of-all-trades (for whom the data set is too large to economically automate), the true creative (who exists to subvert the data set) and, finally, those whose job it is to use the data. However, it is not only technical feasibility that we should consider. Too often, the rhetoric would have you believe that the only thing stopping large-scale automation is the sophistication of the models at our disposal, when in fact financial, regulatory, ethical, legal and political barriers are of equal if not greater importance. Understanding the interplay of each of these for a role in a company is the only way to divine the future of that role.


Featured

LG unveils NanoCell TV range

At the recent LG Electronics annual Innofest innovation celebration in Seoul, Korea, the company unveiled its new NanoCell range: 14 TVs featuring ThinQ AI technology. It also showcased a new range of OLED units.

The new TV models deliver upgraded AI picture and sound quality, underpinned by the company’s second-generation α (Alpha) 9 Gen 2 intelligent processor and deep learning algorithm. As a result, the TVs promise optimised picture and sound by analysing source content and recognising ambient conditions.

LG’s premium range for the MEA market is headlined by the flagship OLED TV line-up, which offers a variety of screen sizes: W9 (model 77/65W9), E9 (model 65E9), C9 (model 77/65/55C9) and B9 (model 65/55B9).

NanoCell is LG’s new premier LED brand, the name intended to highlight outstanding picture quality enabled by NanoCell technology. Ensuring a wider colour gamut and enhanced contrast, says LG, “NanoColor employs a Full Array Local Dimming (FALD) backlight unit. NanoAccuracy guarantees precise colours and contrast over a wide viewing angle while NanoBezel helps to create the ultimate immersive experiences via ultra-thin bezels and the sleek, minimalist design of the TV.”

The NanoCell series comprises fourteen AI-enabled models, available in sizes varying from 49 to 77 inches (models 65SM95, 75/65/55SM90, 65/55/49SM86 and 65/55/49SM81).

The LG C9 OLED TV and the company’s 86-inch 4K NanoCell TV model (model 86SM90) were recently honoured with CES 2019 Innovation Awards. The 65-inch E9 and C9 OLED TVs also picked up accolades from Dealerscope, Reviewed.com, and Engadget.

The α9 Gen 2 intelligent processor used in LG’s W9, E9 and C9 series OLED TVs elevates picture and sound quality via a deep learning algorithm (which leverages an extensive database of visual information), recognising content source quality and optimising visual output.

The α9 Gen 2 intelligent processor is able to understand how the human eye perceives images in different lighting and finely adjusts the tone mapping curve in accordance with ambient conditions to achieve the optimal level of screen brightness. The processor uses the TV’s ambient light sensor to measure external light, automatically changing brightness to compensate as required. With its advanced AI, the α9 Gen 2 intelligent processor can refine High Dynamic Range (HDR) content through altering brightness levels. In brightly lit settings, it can transform dark, shadow-filled scenes into easily discernible images, without sacrificing depth or making colours seem unnatural or oversaturated. LG’s 2019 TVs also leverage Dolby’s latest innovation, which intelligently adjusts Dolby Vision content to ensure an outstanding HDR experience, even in brightly lit conditions.

LG’s audio algorithm can up-mix two-channel stereo to replicate 5.1 surround sound. The α9 Gen 2 intelligent processor fine-tunes output according to content type, making voices easier to hear in movies and TV shows, and delivering crisp, clear vocals in songs. LG TVs intelligently set levels based on their positioning within a room, while users can also adjust sound settings manually if they choose. LG’s flagship TVs offer the realistic sound of Dolby Atmos for an immersive entertainment experience.

LG’s 2019 premium TV range comes with a new conversational voice recognition feature that makes it easier to take control and ask a range of questions. The TVs can understand context, which allows for more complex requests, meaning users won’t have to make a series of repetitive commands to get the desired results. Conversational voice recognition will be available on LG TVs with ThinQ AI in over a hundred countries.

LG’s 2019 AI TVs support HDMI 2.1 specifications, allowing the new 4K OLED and NanoCell TV models to display 4K content at a remarkable 120 frames per second. Select 2019 models offer 4K high frame rate (4K HFR), automatic low latency mode (ALLM), variable refresh rate (VRR) and enhanced audio return channel (eARC).

To find out more about LG’s latest TVs and home entertainment systems, visit https://www.lg.com/ae.



Copyright © 2019 World Wide Worx