Category: Technology

Technology – The collection of techniques, skills, methods, and processes used in the production of goods or services; it is the application of scientific knowledge for practical purposes, especially in industry.

Technology has many effects. It has helped develop more advanced economies (including today’s global economy) and allowed the rise of a leisure class. Many technological processes produce unwanted by-products known as pollution and deplete natural resources, damaging the Earth’s environment. Innovations have always influenced the values of society and raised new questions about the ethics of technology. Examples include the rise of the notion of efficiency in terms of human productivity and the challenges of bioethics.

  • Essay on the Additive Manufacturing (AM)

    Essay on the Additive Manufacturing (AM)

    Additive Manufacturing (AM) is a general term for all technologies that build up material layer by layer, at the micron level, to produce the required shape, in contrast to the standard subtractive process of metal removal. How to define Additive Manufacturing (AM)? It is a fabrication process in which an object is produced layer by layer from a digital design. A 3D model made using computer-aided design (CAD) software or 3D scanning is sliced into individual layers, which provide the toolpath code for a 3D printing machine. Guided by this software, the machine then replicates the model from the base to the top until the object is finished.
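    The slicing step can be illustrated with a toy sketch (a simplified, hypothetical example, not any particular slicer's algorithm): a real slicer intersects the 3D mesh with each z-plane to produce toolpaths, but the plane heights themselves are easy to compute.

```python
def slice_heights(model_height_mm: float, layer_thickness_mm: float) -> list[float]:
    """Return the z-height of each layer plane for a simple slicer.

    A real slicer intersects the 3D mesh with each of these z-planes
    to generate toolpaths; here we only compute where the planes lie.
    """
    if layer_thickness_mm <= 0:
        raise ValueError("layer thickness must be positive")
    heights = []
    z = layer_thickness_mm
    while z < model_height_mm + 1e-9:  # small epsilon absorbs float drift
        heights.append(round(z, 6))
        z += layer_thickness_mm
    return heights

# A 1 mm tall part sliced at 0.2 mm needs 5 layers.
print(len(slice_heights(1.0, 0.2)))  # 5
```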

    Here is the article to explain, What are the Additive Manufacturing (AM) Technologies?

    AM technologies can be broadly divided into three types. The first is sintering, whereby the material is heated without being liquefied to create complex, high-resolution objects. Direct metal laser sintering uses metal powder, whereas selective laser sintering uses a laser on thermoplastic powders so that the particles stick together.

    The second type of AM technology fully melts the material; this includes selective laser melting, which uses a laser to melt layers of metal powder, and electron beam melting, which uses electron beams to melt the powders. The third broad type is stereolithography, which uses a process called photopolymerization, whereby an ultraviolet laser is fired into a vat of photopolymer resin to create torque-resistant ceramic parts able to endure extreme temperatures.

    Additive manufacturing technology, commonly referred to as 3D printing, has captured our collective imagination, producing wild visions of 3D-printed aircraft and bio-printed organs. Although these visions are still far from being fully realized, the technology already has a great impact on our immediate environment and promises to shape the future of manufacturing. Whether the effects arrive in the immediate future or the long term, 3D printing will change how things are done.

    Types of 3D printing;

    There are seven different types of 3D printing processes:

    • Binder jetting: a process in which a liquid bonding agent is deposited onto a powder bed.
    • Directed energy deposition: metal is melted onto a substrate layer by layer.
    • Material extrusion: material, usually a thermoplastic filament liquefied by a heating mechanism, is deposited from an extruder onto a substrate.
    • Material jetting: deposited materials, such as photopolymers, are hardened by ultraviolet light.
    • Powder bed fusion: an energy source such as a laser or an electron beam is directed at a powder bed, heating the individual particles until they melt together.
    • Sheet lamination: sheets of material are bonded together, with the desired shape cut into each layer.
    • Vat photopolymerization: photopolymer resin is exposed to an energy source, such as a laser beam, that solidifies the material bit by bit.

    3D Printing Innovations;

    Fused Deposition Modelling (FDM);

    In the late 1980s and early 1990s, a few organizations introduced new non-SL technologies to the developing commercial 3D printing market. FDM is an extrusion-based process in which a thermoplastic is heated to its melting point as it is fed from filament spools and deposited onto a substrate. Thermoplastics differ from thermosets in that they can be melted and cooled several times.

    Because FDM relies on a thermal extrusion process, it is notable for producing strong parts that serve a more functional purpose than those made with stereolithography (SL), a form of vat photopolymerization. FDM parts are used in industries with performance-critical applications, such as the spacecraft industry.

    Selective Laser Sintering (SLS);

    SLS uses a laser to selectively fuse plastic powder in a powder bed into complete 3D objects. The process is exceptional in that the powder bed itself acts as integrated support, unlike FDM, where parts require 3D-printed support structures. This allows the printing of very complex geometries, including interlocking and moving parts.

    Binder Jetting;

    Although material jetting has not entirely conquered the 3D printing landscape, binder jetting has become another dominant full-color 3D printing method. This process uses piezoelectric inkjet print heads; however, instead of depositing photosensitive ink, it deposits a fluid binding agent that results in sandstone-like prints in full color.

    Metal 3D Printing;

    Although 3D-printed plastics can be invaluable to many industries, aerospace and defense manufacturers are keenly interested in the development of metal 3D printing.

    Direct Energy Deposition (DED);

    Also known as laser cladding, DED adds metal powder to a heat source that melts the particles as they are deposited. Because the technology injects metal powder directly into a heat source that is often attached to a 4- or 5-axis arm, DED systems are not limited to 3D printing on a flat substrate; they can instead build on curved surfaces and existing metal structures. For this reason, laser cladding is often used in the aerospace industry to repair damaged parts. Likewise, DED machines are not necessarily limited in print volume.

    Powder Bed Fusion;

    Unlike DED systems, powder bed machines house a high-powered energy source, usually a laser, in an inert gas chamber; the laser melts metal particles layer by layer, similar to the plastic SLS process. Electron beam melting is a special category of SLM technology that relies on an electron beam instead of a laser, which makes build times much faster. This technology can be better suited to the production of finely detailed parts in small lots when the machine is large enough.

    Applications of Additive Manufacturing (AM);

    Although many of the newest technologies are now on the market, many of the processes mentioned are widely used for rapid prototyping, auxiliary production, and the manufacture of finished parts.

    Visual and Functional Prototypes;

    Producing physical pre-production models with 3D printing became known as rapid prototyping. 3D printing can be a quicker and more precise technique than crafting a model by hand.

    The different technologies mentioned above suit different prototyping applications: SL and DLP for fine features (although the parts may be fragile, they reflect the details that will be included in the end product), and FDM for mechanical testing. PolyJet can reflect real material properties, from the flexibility of rubber to the transparency of glass.

    Tooling;

    These technologies can be used to produce secondary products. For example, many processes are used to print 3D objects that help to create metal parts, such as tooling and investment casting models.

    Tooling is defined as any type of part specialized for the production of a particular component. Examples of tooling include a mold used to form the end part from raw material; a jig designed to hold a part during other processes, such as assembly or drilling; and cutting tools.

    Pros or Benefits of additive manufacturing:

    Similar to standard 3D printing, AM allows the creation of bespoke parts with complex geometries and little wastage. Ideal for rapid prototyping, the digital process means that design alterations can be made quickly and efficiently during the manufacturing process. Unlike more traditional subtractive manufacturing techniques, the lack of material wastage reduces costs for high-value parts, and AM has also been shown to reduce lead times.

    In addition, parts that previously required assembly from multiple pieces can be fabricated as a single object, which can provide improved strength and durability. AM can also be used to fabricate unique objects or replacement pieces for which the original parts are no longer produced. The advantages are as follows:

    Material waste reduction;

    In conventional manufacturing processes, material is typically removed from a larger workpiece; think of timber milling or cutting shapes from sheets of steel. In contrast, AM starts from scratch, adding material to create a component or part. By using only the material necessary to create that part, AM ensures minimal waste. AM also reduces the need for tooling, further limiting the amount of material needed to produce components.
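    The contrast can be made concrete with a small hypothetical comparison (the billet size and 5% support fraction are illustrative assumptions, not measured figures):

```python
def subtractive_waste(stock_volume_cm3: float, part_volume_cm3: float) -> float:
    """Material removed (and wasted) when machining a part from stock."""
    return stock_volume_cm3 - part_volume_cm3

def additive_waste(part_volume_cm3: float, support_fraction: float = 0.05) -> float:
    """Material spent on support structures when printing the same part.

    support_fraction is an illustrative assumption for this sketch.
    """
    return part_volume_cm3 * support_fraction

# Machining a 100 cm^3 part from a 400 cm^3 billet wastes 300 cm^3;
# printing it with ~5% support material wastes only about 5 cm^3.
print(subtractive_waste(400, 100))  # 300
print(additive_waste(100))          # 5.0
```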

    Manufacturing and assembly;

    A significant benefit of additive manufacturing is the ability to combine existing multi-part assemblies into a single part. Instead of creating individual parts and assembling them at a later point, additive manufacturing effectively consolidates manufacture and assembly into a single process.

    Part flexibility;

    Additive manufacturing is appealing to companies that need to create unusual or complex components that are difficult to manufacture using traditional processes. AM enables the design and creation of nearly any geometric form, including ones that reduce the weight of an object while still maintaining stability. Part flexibility is another major waste-reduction aspect of AM: the ability to develop products on demand inherently reduces inventory and other waste.

    Legacy parts;

    AM has given companies the ability to recreate impossible-to-find, no-longer-manufactured legacy parts. For example, the restoration of classic cars has greatly benefited from additive manufacturing technology. Where legacy parts were once difficult and expensive to find, they can now be produced through scanning and X-ray analysis of the original material and parts. In combination with CAD software, this process enables fast and easy reverse engineering of legacy parts.

    Inventory stock reduction;

    AM can reduce inventory, eliminating the need to hold surplus stock and the associated carrying costs. With additive manufacturing, components are printed on demand, meaning there is no over-production, no unsold finished goods, and a reduction in inventory stock.

    Energy savings;

    In conventional manufacturing, machinery and equipment often require auxiliary tools that have greater energy needs. AM uses fewer resources, has less need for ancillary equipment, and thereby reduces manufacturing waste. It also reduces the amount of raw material needed to manufacture a product; as such, there is lower energy consumption associated with raw material extraction, and AM has lower energy needs overall.

    Customization;

    AM offers design innovation and creative freedom without the cost and time constraints of traditional manufacturing. The ability to easily alter original specifications means that AM offers greater opportunities for businesses to provide customized designs to their clients. With the ease of digitally adjusting the design, product customization becomes a simple proposition, and short production runs are easily tailored to specific needs.

    Cons or Drawbacks of additive manufacturing:

    The disadvantages are as follows:

    Production costs;

    Production costs are high. Materials for AM are frequently required in the form of exceptionally fine or small particles, which can considerably increase the raw material cost of a project. Additionally, the inferior surface quality often associated with AM means there is an added cost for surface finishes and the post-processing required to meet quality specifications and standards.

    Cost of entry;

    With additive manufacturing, the cost of entry is still prohibitive to many organizations and, in particular, smaller businesses. The capital costs to purchase necessary equipment can be substantial and many manufacturers have already invested significant capital into the plant and equipment for their traditional operations. Making the switch is not necessarily an easy proposition and certainly not an inexpensive one.

    Additional materials;

    Currently, there is a limit to the types of materials that can be processed within AM specifications, and these are typically pre-alloyed materials in a base powder. The mechanical properties of a finished product are entirely dependent upon the characteristics of the powder used in the process. All the materials and traits required in an AM component have to be included early in the mix; it is therefore impossible to successfully introduce additional materials and properties later in the process.

    Post-processing;

    A certain level of post-processing is required in additive manufacturing, because surface finish and dimensional accuracy can be of lower quality compared with other manufacturing methods. The layering and multiple interfaces of additive manufacturing can cause defects in the product, and post-processing is needed to rectify any quality issues.

    It’s slow;

    As mentioned, additive manufacturing technology has been around since the eighties, yet even in 2021 AM is still considered a niche process. That is largely because AM still has slow build rates and does not provide an efficient way to scale operations to produce a high volume of parts. Depending on the final product sought, additive manufacturing may take up to 3 hours to produce a shape that a traditional process could create in seconds, so it is virtually impossible to realize economies of scale.
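    A back-of-the-envelope calculation makes the scale problem concrete (the per-part times below are illustrative assumptions based on the figures in the text, not vendor data):

```python
def total_hours(parts: int, hours_per_part: float) -> float:
    """Total machine-hours to produce a batch at a fixed per-part time."""
    return parts * hours_per_part

# Printing one part might take ~3 hours; a traditional process such as
# injection molding takes ~10 seconds per part once tooling exists.
am_hours = total_hours(10_000, 3)              # 30,000 machine-hours
molding_hours = total_hours(10_000, 10 / 3600) # roughly 28 machine-hours
print(am_hours, round(molding_hours))
```

    At 10,000 units the gap is three orders of magnitude, which is why AM shines for prototypes and small lots rather than mass production.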

    Principles of Additive Manufacturing;

    AM technologies fabricate models by fusing, sintering, or polymerizing materials in predetermined layers, with no need for tooling. AM makes possible the manufacture of complex geometries, including internal part detail, that are practically impossible to produce using machining and molding processes, because the process does not require predetermined tool paths, draft angles, or undercuts.

    Here, the layers of a model are formed by slicing CAD data with specialized software. All AM systems work on the same principle; however, layer thickness depends on the parameters and machine being used, ranging from 10 µm up to 200 µm. Layers are visible on the part surface in AM, which affects the quality of the final product. The relation between layer thickness and surface orientation is known as the staircase effect. The thinner the layer, the longer the processing time but the higher the part resolution.
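    This trade-off can be sketched numerically (a toy model under the assumption that processing time scales with the number of layers):

```python
import math

def layer_count(part_height_um: float, layer_um: float) -> int:
    """Number of layers needed to build a part of the given height."""
    return math.ceil(part_height_um / layer_um)

# A 20 mm (20,000 um) tall part: 10 um layers give 20x finer resolution
# than 200 um layers, but also roughly 20x as many layers to process.
print(layer_count(20_000, 200))  # 100
print(layer_count(20_000, 10))   # 2000
```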

    Recoating;

    Layers in AM are built up on top of the previous one along the z-axis. After a layer is processed, the work platform drops by a single layer thickness along the z-axis and a fresh layer of material is recoated; the recoating method differs between systems. In resin-based systems, a traversing edge flattens the resin; in powder-based systems, deposited powder is spread using a roller or wiper; in some systems, the material is deposited through a nozzle. Because recoating time can be even longer than layer processing time, multiple parts are built together so that a single recoating pass serves the whole build. Software is available to position and orient parts so that the maximum number of parts can be built together; examples include VISCOM RP and the Smart Space module in MAGICS.

    Some delicate parts produced through AM technologies need a support structure to hold the part on the work platform during the build process. Each AM machine uses a different support structure, designed from a specific material for effective use of the build space. Commonly used support structures are thin, small, pointed teeth that minimize part contact so that they can be removed easily with hand tools.

  • Essay on the Global Positioning System (GPS)

    Essay on the Global Positioning System (GPS)

    The Global Positioning System (GPS) is a satellite-based positioning system owned by the United States government and operated by the United States Air Force. The fundamental technique of GPS is to measure the ranges between the receiver and several simultaneously observed satellites. The positions of the satellites are predicted and broadcast alongside the GPS signal to the user. From the several known satellite positions and the measured distances between the receiver and the satellites, the position of the receiver can be determined. The change in position over time is then the velocity of the receiver.
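    As a simplified two-dimensional illustration of this ranging principle (real GPS solves in 3D and also estimates a receiver clock bias), the sketch below recovers a position from three known "satellite" positions and measured ranges; subtracting the circle equations yields a linear system in x and y:

```python
import math

def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) given three known positions and measured ranges.

    Subtracting the circle equation at p1 from those at p2 and p3
    gives two linear equations, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# "Satellites" at (0,0), (10,0), (0,10); the receiver is really at (3,4),
# so the measured ranges are the true distances to that point.
x, y = trilaterate_2d((0, 0), 5.0,
                      (10, 0), math.hypot(3 - 10, 4),
                      (0, 10), math.hypot(3, 4 - 10))
print(round(x, 6), round(y, 6))  # 3.0 4.0
```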

    Here is the article to explain, How to define the Global Positioning System (GPS)?

    The first global positioning system receivers were very simple and basic. They used monochrome screens and only relayed basic information like latitude and longitude. Over the years, the next generation brought more user-friendly, map-based location devices with color screens. Furthermore, receiver and other component prices decreased over time, making the use of GPS more mainstream in devices such as smartphones. GPS also operates independently, which makes it accessible to anyone and able to work freely with other GPS receivers.

    Today, it provides civil, military, and commercial users around the world with crucial information like speed, elevation, and geolocation. The system has revolutionized today’s technology by becoming more interactive, effective, and useful in multiple industries. Our project on this system will explore the basic principles of the GPS, the various hardware that makes it work, and, in depth, the operation of the system, including theoretical calculations for positioning, speed, bearing, and distance to destination.

    Meaning of global positioning system (GPS);

    GPS is a satellite-based positioning system owned by the United States government and operated by the Air Force. The basic technique of GPS is to measure the ranges between the receiver and several simultaneously observed satellites. The positions of the satellites are forecast and broadcast alongside the GPS signals to the user. From the various known satellite positions and the measured distances between receiver and satellites, the position of the receiver can usually be determined. The change in position over time is then the speed of the receiver.

    10 Pros or Benefits or Advantages of Global Positioning System (GPS);

    The following are the advantages of GPS:

    1. The GPS signal is available worldwide, so users are never deprived of it. It is also the primary navigation system on water, where correct directions are otherwise hard to come by.
    2. GPS can be used anywhere in the world because it is powered by a global satellite constellation; a solid tracking system and a GPS receiver are all you need.
    3. It makes navigation extraordinarily straightforward, telling you the direction for every turn you take on the way to your destination.
    4. It works in all weather, so you need not worry about the climate as with other navigation methods.
    5. The GPS receiver calibrates itself, making it easy for anyone to use anywhere around the globe.
    6. GPS costs little compared with alternative navigation systems, and its main attraction is 100% coverage of the Earth.
    7. It also helps you find nearby restaurants, hotels, and gas stations, which is extremely helpful in a new place.
    8. Due to its low price, it is easy to integrate into other technologies such as phones.
    9. The system is updated frequently by the US government and is therefore extremely advanced.
    10. It provides users with location-based information in real-time. This is helpful in applications such as mapping (used in cars), location (geocaching), performance analysis (used in sports), etc. Example: Google Earth.

    10 Cons or Drawbacks or Disadvantages of Global Positioning System (GPS);

    The following are the disadvantages of GPS:

    1. The GPS chip is power-hungry and drains the battery in 8 to 12 hours, requiring frequent replacement or recharging.
    2. GPS does not penetrate solid walls or structures and is also affected by large constructions. This means users cannot use GPS indoors, underwater, in dense tree cover, or in underground stores and similar places.
    3. Sometimes GPS can fail for various reasons, in which case you should carry a backup map and directions.
    4. If you are using GPS on a battery-operated device, a battery failure may leave you needing an external power supply, which is not always possible.
    5. Sometimes GPS signals are inaccurate due to obstacles such as buildings and trees, and occasionally due to extreme atmospheric conditions such as geomagnetic storms.
    6. The accuracy of GPS depends on receiving sufficient signal quality. The GPS signal is affected by multipath, electromagnetic interference, the ionosphere, etc., resulting in an error of about 5 to 10 meters. However, different receivers have different levels of accuracy.
    7. It relies entirely on receiving radio satellite signals, so EMP, nuclear weapons, radio interference, and failed satellites can affect its operation.
    8. The power-hungry GPS chip drains the battery in eight to twelve hours, which forces frequent battery replacement or recharging.
    9. GPS does not penetrate solid walls or structures, and it is also hampered by massive constructions.
    10. Another problem is that the position can occasionally be significantly in error, especially when the number of visible satellites is limited. Satellites use atomic clocks and are very precise, but sometimes there are discrepancies and therefore time measurement errors.
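    Because ranges are derived from signal travel time, even tiny clock discrepancies matter; a quick sanity check using the speed of light shows why:

```python
C = 299_792_458  # speed of light in m/s

def range_error_m(clock_error_s: float) -> float:
    """Pseudorange error produced by a given timing error."""
    return C * clock_error_s

# A clock discrepancy of just 100 nanoseconds shifts the measured
# range by about 30 meters.
print(round(range_error_m(100e-9), 1))  # 30.0
```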

    Applications of Global Positioning System (GPS);

    As mentioned before, over the years GPS technology has become more user-friendly, intuitive, and cheaper to operate. Receiver and other component prices have decreased over time, making the use of GPS more mainstream in devices such as smartphones. Furthermore, the independent operation of GPS, accessible to anyone, gave it the ability to work freely with other GPS receivers. Today, it provides civil, military, and commercial users around the world with crucial information like speed, elevation, and geolocation.

    The accuracy of civilian handheld GPS receivers is usually around ±5 meters. However, more advanced (and costlier) GPS receivers provide positions accurate to ±1 cm. These receivers have revolutionized many industries, where highly accurate positioning is used for a wide range of tasks.

    Aviation;

    The role of GPS in aviation is one of the most important ones. It not only helps with real-time navigation but also provides the aircraft with a host of other information including speed and elevation. Furthermore, GPS enables the airline operations center to select the safest, fastest, and most fuel-efficient routes to the destination and also enables them to track if the flight is on course to the pre-determined route.

    Marine;

    Captains use high-accuracy GPS to navigate their vessels through the vast oceans, unfamiliar harbors, and canals. This also prevents them from running aground or hitting obstacles. Similarly, like in all the other industries, GPS also assists in the planning of the route helping captains and navigation controllers to map the safest, fastest, and most cost-efficient route.

    Farming;

    GPS receivers in farming help farmers map their fields and plantations. They ensure that seeds aren’t replanted in the same areas and help farmers return to the same position in the field to plant in the future. GPS also helps farmers keep working under conditions of low visibility, such as fog and darkness, as each piece of machinery is guided by its GPS position rather than visual references. Additionally, mapping soil sample locations with high-accuracy GPS allows farmers to keep track of the most fertile areas.

    Science;

    Scientists use GPS technology to conduct a large variety of experiments and analyses, ranging from biology to physics to earth sciences. GPS collars or “tags” can now be fitted on animals; these repeatedly record the animal’s whereabouts and communicate the data via the satellite system back to the researchers. This provides them with detailed data about the animal’s movements without having to physically track down specific animals.

    GPS technology is also used by earth scientists to conduct a wide range of research on physical land features, such as mountainous areas and fault lines. GPS allows them to study not only the speed and direction of movement but also how landscapes change over time.

    Military;

    GPS was originally developed by the United States Department of Defense for use by the US military but was later made available for public use. GPS is now essential to the military. Many countries around the world, like India and China, are launching their own GPS satellites to gain a combat advantage. The systems allow militaries to track their personnel, vehicles, and assets.

    Moreover, GPS is also crucial in missile technology, providing warheads with tracking and guidance to various targets at all times of day and in all weather conditions. Countries like the USA also use sophisticated high-accuracy GPS to map out and plan the layout of their assets across the field, which is a huge strategic advantage.

    Market Share;

    The global GPS market is expected to increase by 10.0% year-on-year during the forecast period. Global positioning system (GPS) technology has extended its applications into many industries, and new applications are being developed thanks to its significant advantages.

    Explain;

    Some applications, such as determining location, are relatively simple, whereas others are complicated blends of GPS with communications and other technologies. In recent years, companies building GPS satellites and instrumentation have seen rapid growth in industrial and commercial GPS applications. It is expected that technological advances in this sector will have a positive impact on the market in the following years.

    One of the main factors contributing to growth is the increased use of the technology among smartphone users. The market has also observed the progress of multifunctional GPS over the last few years. However, the lack of precision in GPS data presents a major challenge for the industry during the forecast period. The benefits of GPS, coupled with its wireless connectivity and low power consumption, are also anticipated to drive market demand over the forecast period. However, factors like its high cost of operation might hamper market progress.

    Summary;

    The global positioning system is a satellite navigation system consisting of a minimum of 24 satellites. GPS operates 24 hours a day in any weather, anywhere in the world, without subscription or setup fees. The United States Defense Department initially placed the satellites in orbit for military use, but in the 1980s they were made available for civilian use. Over the past two decades, global positioning system (GPS) technology has been rapidly developed and used for various applications in different industries.

    At present, GPS still has limits to measurement accuracy, and the signal does not penetrate solid walls or structures. The application of GPS as a navigation, survey, and information tool is nevertheless promising, because it can measure dynamic and static displacements in real-time, whereas conventional monitoring systems using other sensors, such as accelerometers, cannot measure static and quasi-static displacements. In addition, rapid advances in GPS devices and algorithms can mitigate erroneous GPS data sources, and integrated systems using GPS receivers with additional sensors can provide accurate measurements.

  • Augmented Reality (AR) Definition Characteristics Essay

    Augmented Reality (AR) Definition Characteristics Essay

    Augmented Reality (AR) is a technology that connects the digital and physical worlds to create a virtual experience. Using a device’s camera, digital content such as graphics, sound, and video is displayed on-screen to deliver augmented experiences. Unlike virtual reality, augmented reality isn’t a fully immersive, synthetic experience; instead, it comprises virtual elements placed in your direct surroundings. Mobile and desktop apps use augmented reality technology to mix digital features into the real environment.

    Here is the article to explain, Augmented Reality (AR) Meaning Definition Characteristics Types Essay!

    Augmented Reality is the full name of the technology. For instance, AR can be used to overlay scores on televised sports plays and to pop out 3D pictures, texts, and emails.

    What do you understand about Augmented Reality? Meaning and Definition;

    Augmented reality is a computer system that can combine the real world with computer-generated data. With this system, virtual objects are blended into real footage in real-time. Thus, we can imagine the high potential this technology might have if applied in the field of education. In augmented reality, the computer works as a mirror. With a camera and a black-and-white printed marker, we transmit to the computer the angle and coordinates of an object.

    Thus real elements are mixed with virtual elements in real-time, and, as in a mirror, the image appears inverted on the screen, which makes orientation a very complicated task. Virtual models can be animated and multiplied. With this technology, we can create and combine animated sequences to control a virtual object and share the interaction with others.

    In the field of education, we can use this technology to create interactive 3-D books that respond to changes in the angle of observation. From the beginning, advertising companies were the first to use this system, through interactive web-based augmented reality applications. Because of its potential, augmented reality will be widely applied in fields such as architecture, surgery, simulation, geology, and ecology, among others.

    How does Augmented Reality (AR) work?

    The basic process of creation in augmented reality is to create virtual models that will be stored in a database. After this, the model will be retrieved from the database, rendered, and registered into the scene. Sometimes, this process implies serious difficulties in many application areas. The virtual content must be stored in the database and also published as printed material containing an index to our database. This link to the database adds complexity to the final work.

    To avoid these difficulties, it is necessary to fully encode our virtual content in a bar code, which is not understandable to a human without a specific augmented reality system. When captured by an AR system, the virtual models are then extracted from the incoming image.

    Embedding → Acquisition → Extraction → Registration → Rendering

    The virtual model is created and printed. This printed representation is then acquired by the augmented reality device. Next, the virtual models are extracted from the acquired image. Finally, the virtual models are registered onto the scene and rendered.
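    As a rough illustration, the five stages above can be sketched in code. The sketch below is purely hypothetical Python (not a real AR library); each stage is modeled as a small function passing a dict along the pipeline, and all names are invented.

```python
# Hypothetical sketch of the embedding -> acquisition -> extraction ->
# registration -> rendering pipeline. Not a real AR framework.

def embed(model):
    # Embedding: encode the virtual model into a printable marker payload
    return {"marker": f"ID:{model['id']}", "model": model}

def acquire(printed):
    # Acquisition: the AR device captures an image containing the marker
    return {"image": printed["marker"], "model": printed["model"]}

def extract(frame):
    # Extraction: recover the virtual model referenced by the marker
    assert frame["image"].startswith("ID:")
    return frame["model"]

def register(model, pose):
    # Registration: place the model at the camera-estimated pose
    return {**model, "pose": pose}

def render(scene):
    # Rendering: produce the final augmented output (here, a string)
    return f"{scene['name']} at {scene['pose']}"

model = {"id": 7, "name": "teapot"}
out = render(register(extract(acquire(embed(model))), pose=(1, 2, 0)))
print(out)  # teapot at (1, 2, 0)
```

    In a real system, acquisition and extraction involve image processing, and registration involves pose estimation; the sketch only shows the order of the stages.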

    Besides adding virtual objects into the real world, AR must be able to remove them. Desirable systems would be those that incorporate sound to broaden the augmented experience. These systems should integrate headsets equipped with microphones to capture incoming sound from the environment; thus having the ability to hide real environmental sounds by generating a masking signal.

    Features or Characteristics of Augmented Reality (AR);

    The following are the main features or characteristics of Augmented Reality;

    Haptic Technology;

    The main goal of AR is interactivity between the user and virtual objects. Haptic technology is the system that allows the user to have tactile experiences within immersive environments. With this system, the user interacts with the virtual environment through an augmented system. To bring realism to these interactions, the system must allow the user to feel the touch of surfaces and textures, and the weight and size of virtual objects.

    With haptic devices, mass can be assigned to virtual elements so that the weight and other qualities of an object can be felt in the fingers. This system requires complex computing devices endowed with great power. Furthermore, the system must recognize the three-dimensional location of fiducial points in the real scene.

    Position-Based Augmented Reality;

    For correct registration between the virtual and real images, the system must represent both images in the same frame of reference, using sensitive calibration and measurement systems to determine the different coordinate frames in the AR system. The system measures the position and orientation of the camera with respect to the coordinate system of the real world. These two parameters determine the world-to-camera transform, C. We can quantify the camera-to-image parameters, P, by calibrating the video camera. Finally, the third parameter, O, is computed by measuring the position and orientation of the virtual object in the real world, before being rendered and combined with the live video.
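    The transform chain above can be sketched numerically. The following Python uses 2D homogeneous 3x3 matrices for brevity (a real AR system would use 3D, 4x4 or 3x4 matrices); all matrix values are invented for illustration.

```python
# Sketch of the transform chain: image_point = P * C * O * object_point
# O: object-to-world, C: world-to-camera, P: camera-to-image.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, p):
    # Apply a homogeneous 3x3 matrix to a 2D point and dehomogenize
    x, y, w = (sum(m[i][j] * [p[0], p[1], 1][j] for j in range(3))
               for i in range(3))
    return (x / w, y / w)

# O: place the virtual object at world position (2, 3)
O = [[1, 0, 2], [0, 1, 3], [0, 0, 1]]
# C: camera sits at world position (1, 1), so world-to-camera subtracts it
C = [[1, 0, -1], [0, 1, -1], [0, 0, 1]]
# P: camera-to-image projection, here a simple 10x scale (focal length)
P = [[10, 0, 0], [0, 10, 0], [0, 0, 1]]

M = matmul(P, matmul(C, O))
print(apply(M, (0, 0)))  # object origin lands at image point (10.0, 20.0)
```

    The point is only the composition order: the object is first placed in the world (O), then expressed in camera coordinates (C), then projected to the image (P).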

    Computer Vision for Augmented Reality;

    Augmented Reality uses computer vision methods to improve performance. Thus, the system eliminates calibration errors by processing the live video data. Other systems invert the camera projection to obtain an approximation of the viewer pose. Recently, a mixed method uses fiducial tracking combined with a magnetic position tracking system to determine the parameters of the cameras in the scene. Currently, the problems of camera calibration are solved by registering the virtual objects over the live video.

    Animation;

    If we want an AR system to be credible, it must have the ability to animate the virtual elements within the scene. Thus, we can distinguish between objects moving by themselves and those whose movements are produced by the user. These interactions are represented in the object-to-world transform by multiplication with a translation matrix.
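    As a minimal sketch of this idea (hypothetical Python, 2D homogeneous 3x3 matrices for brevity), the snippet below updates an object-to-world transform each frame by multiplying it with a translation matrix.

```python
# Sketch: animating a virtual object by left-multiplying its
# object-to-world transform with a translation matrix each frame.

def translate(dx, dy):
    return [[1, 0, dx], [0, 1, dy], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

O = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # object starts at the world origin
for frame in range(3):                 # move +1 unit in x per frame
    O = matmul(translate(1, 0), O)

print(O[0][2], O[1][2])  # world position after 3 frames: 3 0
```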

    Portability;

    Since the user can walk through large spaces, Augmented Reality should pay special attention to the portability of its systems, far from controlled environments, allowing users to walk outdoors in comfort. This is accomplished by making the scene generator, the head-mounted display, and the tracking system autonomous.

    Types and Categories of Augmented Reality;

    There are several types of augmented reality in use today. From marketing to gaming, there are a lot of businesses in the exploration phase of utilizing this emerging technology. The question is… how? Easier asked than answered. To get a better understanding of how you can use AR, let’s walk through the different types and see examples of each.

    Marker-based;

    Marker-based AR uses markers to trigger an augmented experience. The markers, often made with distinct patterns like QR codes or other unique designs, act as anchors for the technology. When a marker in the physical world is recognized by an augmented reality application, the digital content is placed on top of it. Marker-based augmented reality is commonly used for marketing and retail purposes. Think business cards that speak and brochures that move.

    In this example, marker-based AR is being used for retail purposes in someone's home. Imagine if you could see what your new bathroom vanity would look like before you buy it. Plus, with this application, you can swipe through the various sink options to see what looks best in the space.

    Markerless;

    Markerless AR is more versatile than marker-based AR, as it allows the user to decide where to put the virtual object. You can try different styles and locations completely digitally, without having to move anything in your surroundings. Markerless augmented reality relies on the device's hardware, including the camera, GPS, digital compass, and accelerometer, to gather the information necessary for the AR software to do its job.

    In this example, the virtual car can be positioned anywhere, regardless of the surrounding area. You can customize the Mustang itself, adjust and rotate the view, and learn additional product information. The following types of augmented reality technically fall under the umbrella of markerless AR in that they don't need a physical marker to trigger the digital content.

    Location-based;

    Location-based AR ties digital content and the experience it creates to a specific place. The objects are mapped out so that when a user's location matches the predetermined spot, they are displayed on the screen. The game that brought augmented reality to the masses, Pokemon Go, is an example of location-based AR. The experience brings virtual Pokemon to our world through your smartphone, and users are encouraged to find as many of the characters as possible.

    Superimposition;

    Superimposition AR recognizes an object in the physical world and enhances it in some way to provide an alternate view. This can include recreating a portion of the object or the whole thing. In this example, the chair is copied, rotated, and placed in another location around the table. The user can do many things with this technology, like decide whether they want four chairs and a little elbow room or whether they can comfortably seat six at the same table.

    Projection-based;

    Projection-based AR is a little different from the other types of markerless augmented reality. Namely, you don't need a mobile device to display the content. Instead, light projects the digital graphics onto an object or surface to create an interactive experience for the user. Yes, that's right, holograms! Projection-based AR is used to create 3D objects that can interact with the user. It can be used to show a prototype or mockup of a new product, even disassembling each part to better show its inner workings.

    Outlining;

    Outlining AR recognizes boundaries and lines to help in situations when the human eye can’t. Also, Outlining augmented reality uses object recognition to understand a user’s immediate surroundings. Think about driving in low light conditions or seeing the structure of a building from the outside. This example of outlining AR tells the driver exactly where the middle of the lane is to keep them out of harm’s way. Similar applications include parking your car and having the boundaries outlined so that you can see exactly where the parking space is.

    What does Augmented Reality do for Education?

    The use of Augmented Reality in school promotes teamwork and allows students to view three-dimensional models, which facilitates learning through a fun and interactive process. Likewise, this system can be applied to a wide variety of learning areas outside the educational field. Among the reasons that make AR attractive for educational centers are the interaction between virtual and real environments, the easy manipulation of objects within the virtual environment, and the ease of movement from one space to another in real-time.

    Through the use of HMDs, AR promotes team communication, showing possible gestures and other communication signals from the students in the group. Students view all this information on their screens, which facilitates interpersonal communication. This allows this form of collaboration to be seen more like face-to-face communication than isolated communication through displays on the HMD screen.

    In these collaborative environments, the information taken from the real world is socially shared in the virtual space. The advantage of using AR systems instead of other technologies is that the results are highly intuitive for people who have no experience with other computer systems. Thus, even the youngest students can enjoy a fun interactive experience.

    Fantasy Interfaces;

    Little children often fantasize about being actors in a fairy tale. With AR, we can make this fantasy a reality by using a book with markers that acts as the primary interface. Thus, we can turn the pages, read the text, and also see three-dimensional animations that tell the story better. These 3D models are embedded in the page of the book so the child can see the animations from any point of view, viewing them from different angles. These animations can be adapted to any size of book so that reading becomes a very fun and immersive experience.

    These systems can be used at any educational level, making the learning process a very engaging task. To apply this system successfully, educators should collaborate with the developers of these applications to find the best way to apply it in school environments.

    Future directions;

    Future monitoring systems will be more robust and will incorporate mixed media to remedy the mistakes of registration. These systems will fully reproduce the scenes in real-time within the HMD. Moreover, future AR systems will offer users the ability to walk in great outdoor spaces.

    To achieve this, these systems will have to evolve towards better portability. For a greater sense of immersion, they should also incorporate 3D sound. As for the political and social dimensions, with the gradual introduction of Augmented Reality into the daily tasks of our lives, it will become more accepted. Gradually, we will see that this system allows users to do their work more easily and quickly, instead of being seen as a system that replaces human workers.

    Conclusion;

    Augmented Reality is less technologically advanced than Virtual Reality systems, but by contrast, AR is much more commercial. Nowadays, AR can be found in research laboratories and academic centers. The next development of AR will initially be in aircraft manufacturing. On the other hand, its introduction to the medical field will take longer than in other areas. AR will probably be used in medical training before surgery.

    Another area where AR will develop strongly in the coming years is tours through outdoor environments wearing a head-mounted display, facilitating the development of advanced navigation systems and visualizations of past and future environments. These systems will make orientation a much easier task. AR systems will also include 3D maps displaying information about the elements we're looking at and their dimensions, and will show the easiest way to reach a destination.

    Regarding the application of AR in education, lessons will be better understood through visualizations of history, geography, anatomy, and the sciences in general, which will make the learning process much easier. After the basic problems of Augmented Reality are solved, advanced virtual elements will be developed that are perceived as being as realistic as the real world. To achieve this, the conditions of lighting, texturing, shading, and registration will have to be almost perfect, so that we can wear a pair of glasses outdoors that shows us realistic virtual elements with which we interact normally.

  • Database Management System (DBMS) History


    What is the History of Database Management System (DBMS)? A DBMS is a computer software program designed to manage all databases currently installed on a system hard drive or network. Different types of database management systems exist, some of them designed for the oversight and proper control of databases configured for specific purposes. Here are some examples of the various incarnations of DBMS technology currently in use, and some of the basic elements that are part of DBMS software applications. Data is a collection of facts and figures. The volume of collected data increases day by day, and it needs to be stored in a device or software that keeps it safe.

    Here is the article to explain, Database Management System (DBMS) Introduction and their History!

    What is DBMS? Database Management System (DBMS) is software for storing and retrieving users’ data while considering appropriate security measures. It consists of a group of programs that manipulate the database. The DBMS accepts the request for data from an application and instructs the operating system to provide the specific data. In large systems, a DBMS helps users and other third-party software to store and retrieve data. Also, DBMS allows users to create their databases as per their requirements. The term “DBMS” includes the use of the database and other application programs. It provides an interface between the data and the software application.

    Introduction to Database Management System (DBMS);

    A Database Management System (DBMS) is a set of computer programs that controls the creation, maintenance, and use of a database. It allows organizations to place control of database development in the hands of database administrators (DBAs) and other specialists. Also, DBMS is a system software package that helps the use of the integrated collection of data records and files known as databases. It allows different user application programs to easily access the same database. DBMSs may use any of a variety of database models, such as the network model or relational model. In large systems, a DBMS allows users and other software to store and retrieve data in a structured way.

    Instead of having to write computer programs to extract information, users can ask simple questions in a query language. Thus, many DBMS packages provide fourth-generation programming languages (4GLs) and other application development features. A DBMS helps specify the logical organization of a database, and access and use the information within it. It provides facilities for controlling data access, enforcing data integrity, managing concurrency, and restoring the database from backups. A DBMS also provides the ability to logically present database information to users.
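    To illustrate asking a "simple question" in a query language rather than writing an extraction program, here is a small sketch using Python's built-in sqlite3 module; the employee table and its data are invented for the example.

```python
# Sketch: a declarative query replaces a custom extraction program.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (name TEXT, dept TEXT, salary INT)")
con.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                [("Ada", "R&D", 90), ("Ben", "Sales", 60),
                 ("Cy", "R&D", 80)])

# The "simple question": who works in R&D?
rows = con.execute(
    "SELECT name FROM employee WHERE dept = 'R&D' ORDER BY name").fetchall()
print([r[0] for r in rows])  # ['Ada', 'Cy']
```

    The user states what data they want, and the DBMS decides how to retrieve it.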

    History of Database Management System (DBMS);

    Here, are the important landmarks from history:

    • 1960 – Charles Bachman designed the first DBMS system
    • 1970 – E. F. Codd published the relational model of data (IBM's Information Management System, IMS, had appeared in the late 1960s)
    • 1976 – Peter Chen coined and defined the Entity-relationship model also known as the ER model
    • 1980 – Relational Model becomes a widely accepted database component
    • 1985 – Object-oriented DBMS develops.
    • 1990 – Incorporation of object-orientation in relational DBMS.
    • 1991 – Microsoft ships MS Access, a personal DBMS that displaces all other personal DBMS products.
    • 1995 – First Internet database applications
    • 1997 – XML applied to database processing. Many vendors begin to integrate XML into DBMS products.
    Charles Bachman;

    Charles Bachman was the first person to develop the Integrated Data Store (IDS), which was based on a network data model and for which he received the Turing Award (the most prestigious award in computer science, equivalent to the Nobel Prize). It was developed in the early 1960s. In the late 1960s, IBM (International Business Machines Corporation) developed the Information Management System (IMS), a database system still used in many places today. It was based on the hierarchical database model.

    It was during 1970 that the relational database model was developed by Edgar Codd. Many of the database models we use today are relationally based, and the relational model was considered the standard from then on; it is still in wide use in the market. Later that decade, IBM developed the Structured Query Language (SQL) as part of the System R project. It was declared a standard language for queries by ISO and ANSI.

    Transaction management systems for processing transactions were also developed by James Gray, for which he received the Turing Award. Further, there were many other models with rich features like complex queries and datatypes for inserting images, among others. The Internet age has perhaps influenced data models even more: data models were developed using object-oriented programming features and embedded in markup languages like HyperText Markup Language (HTML) for queries. With humongous amounts of data available online, DBMS is gaining more significance day by day.

    More History of DBMS;

    Databases have been in use since the earliest days of electronic computing. Unlike modern systems, which can be applied to widely different databases and needs, the vast majority of older systems were tightly linked to custom databases to gain speed at the expense of flexibility. Originally, DBMSs were found only in large organizations with the computer hardware needed to support large data sets. Some types of DBMS are:

    1960s Navigational DBMS;

    As computers grew in speed and capability, several general-purpose database systems emerged; by the mid-1960s there were several such systems in commercial use. Interest in a standard began to grow, and Charles Bachman, the author of one such product, Integrated Data Store (IDS), founded the “Database Task Group” within CODASYL, the group responsible for the creation and standardization of COBOL. In 1971 they delivered their standard, which generally became known as the “Codasyl approach”, and soon there were several commercial products based on it available.

    1970s Relational DBMS;

    Edgar Codd worked at IBM in San Jose, California, in one of their offshoot offices that were primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the Codasyl approach, notably the lack of a “search” facility. In 1970, he wrote several papers that outlined a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.

    In this paper, he described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in Codasyl, Codd's idea was to use a "table" of fixed-length records. A linked-list system would be very inefficient when storing "sparse" databases where some of the data for any one record could be left empty. The relational model solved this by splitting the data into a series of normalized tables, with optional elements being moved out of the main table to where they would take up room only if needed.

    Some differences between DBMSs;

    SQL (Structured query language) is a database computer language designed for managing data in relational database management systems (RDBMS) and originally based upon relational algebra. Its scope includes data insert, query, update and delete, schema creation and modification, and data access control. SQL was one of the first languages for Edgar F. Codd’s relational model in his influential 1970 paper, “A Relational Model of Data for Large Shared Data Banks” and became the most widely used language for relational databases.
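    The scope listed above (insert, query, update, delete, and schema creation) can be sketched with Python's built-in sqlite3 module; the book table is invented for illustration.

```python
# Sketch of SQL's scope: schema creation, insert, update, query, delete.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT)")     # schema
con.execute("INSERT INTO book (title) VALUES ('Relational Model')")       # insert
con.execute("UPDATE book SET title = 'A Relational Model' WHERE id = 1")  # update
title = con.execute("SELECT title FROM book WHERE id = 1").fetchone()[0]  # query
con.execute("DELETE FROM book WHERE id = 1")                              # delete
count = con.execute("SELECT COUNT(*) FROM book").fetchone()[0]

print(title, count)  # A Relational Model 0
```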

    PHP (Hypertext Preprocessor) provides a range of facilities that allow web database developers to retrieve data from a database and merge this dynamic content with static content on a web page. A web database setup includes the actual database (where the data are stored) and the DBMS, which manages all access to the database; the application server manages communication with the database server through the DBMS API.

    Oracle DBMS;

    An Oracle database system, identified by an alphanumeric system identifier (SID), comprises at least one instance of the application, along with data storage. An instance, identified persistently by an instantiation number, comprises a set of operating-system processes and memory structures that interact with the storage. In addition to storage, the database consists of online redo logs (or logs), which hold transactional history. Processes can in turn archive the online redo logs into archive logs (offline redo logs), which provide the basis (if necessary) for data recovery and some forms of data replication.

    The Oracle DBMS can store and execute stored procedures and functions within itself. PL/SQL (Oracle Corporation’s proprietary procedural extension to SQL), or the object-oriented language Java can invoke such code objects and/or provide the programming structures for writing them. Also, DBMS stands for Database Management System which is a general term for a set of software dedicated to controlling the storage of data.

    RDBMS stands for Relational Database Management System. This is the most common form of DBMS. Invented by E. F. Codd, the only way to view the data is as a set of tables. Because there can be relationships between the tables, people often assume that is what the word "relational" means. Not so. Codd was a mathematician, and the word "relational" is a mathematical term from set theory. It means, roughly, "based on tables."

  • RDBMS stands for Meaning and Definition


    What is the full form of RDBMS, and what are its meaning and definition? RDBMS is the abbreviation of Relational Database Management System. The structure of an RDBMS consists of database tables, fields, and records. Each RDBMS table consists of database table rows, and each database table row consists of one or more database table fields. RDBMSs are used on mainframe, midrange, and microcomputers. MS SQL Server, DB2, Oracle, and MySQL are the most popular RDBMSs.

    Here is the article to explain, What is the RDBMS stands for Meaning and Definition?

    RDBMS stands for Relational Database Management Systems. An RDBMS is a program that allows us to create, delete, and update a relational database. A relational database is a database system that stores and retrieves data in a tabular format organized in rows and columns. It is a smaller subset of DBMS, designed on principles set out by E. F. Codd in the 1970s. Major DBMSs like SQL Server, MySQL, and Oracle are all based on the principles of relational DBMS. The data are stored in tables that may be related by common fields, and the data stored in the database tables are manipulated by the relational operators provided by the RDBMS. SQL is the database query language in most RDBMSs.

    Meaning and Definition of Relational Database Management System (RDBMS);

    A Relational Database Management System (the full form of RDBMS) is a database system that provides access to a relational database. The database system is a collection of database functions that help the end-user build, maintain, supervise, and utilize the database. A relational database is a database managed according to the relational data model. Data are stored and accessed in a tabular structure, a combination of rows and columns, containing one record per row.

    Why RDBMS?

    We will use the terms tables and relations interchangeably.

    • In an RDBMS, the data is logically perceived as tables.
    • Tables are logical data structures that we assume to hold the data that the database intends to represent.
    • Also, tables are not physical structures.
    • Each table has a unique table name.

    The relational approach owes its strength to the fact that the values of every table are associated with those of others; it has the potential to handle larger volumes of data and to express queries easily.
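    A short sketch of this point, using Python's built-in sqlite3 module: two tables whose values are associated through a common field can be queried together with a join. Table names and data are invented for the example.

```python
# Sketch: values of one table associated with another via a common field.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dept (id INT, name TEXT)")
con.execute("CREATE TABLE emp (name TEXT, dept_id INT)")
con.execute("INSERT INTO dept VALUES (1, 'R&D'), (2, 'Sales')")
con.execute("INSERT INTO emp VALUES ('Ada', 1), ('Ben', 2)")

# The join expresses the association between the two tables
rows = con.execute("""
    SELECT emp.name, dept.name
    FROM emp JOIN dept ON emp.dept_id = dept.id
    ORDER BY emp.name""").fetchall()
print(rows)  # [('Ada', 'R&D'), ('Ben', 'Sales')]
```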

    The features of RDBMS;

    Relational Database Management Systems maintain data integrity through the following features:

    • Entity Integrity: No two records of a database table can be complete duplicates.
    • Referential Integrity: Only rows that are not referenced by other tables can be deleted; otherwise, data inconsistency may result.
    • User-defined Integrity: Rules defined by the user based on confidentiality and access.
    • Domain Integrity: The columns of database tables are constrained within structured limits, based on default values, data types, or ranges.
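    Three of the four integrity features above map directly onto SQL constraints, sketched here with Python's built-in sqlite3 module (user-defined integrity would be enforced by application rules; table names and data are invented): PRIMARY KEY for entity integrity, FOREIGN KEY for referential integrity, and CHECK for domain integrity.

```python
# Sketch: integrity rules expressed as SQL constraints.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when on
con.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY)")  # entity integrity
con.execute("""CREATE TABLE emp (
    name    TEXT,
    age     INTEGER CHECK (age BETWEEN 18 AND 99),  -- domain integrity
    dept_id INTEGER REFERENCES dept(id))""")        # referential integrity
con.execute("INSERT INTO dept VALUES (1)")
con.execute("INSERT INTO emp VALUES ('Ada', 30, 1)")  # satisfies all rules

errors = []
for row in [("Ben", 5, 1),   # age 5 violates the CHECK (domain) rule
            ("Cy", 40, 9)]:  # dept 9 does not exist (referential) rule
    try:
        con.execute("INSERT INTO emp VALUES (?, ?, ?)", row)
    except sqlite3.IntegrityError as e:
        errors.append(str(e))

print(len(errors))  # both bad inserts were rejected
```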

    The Characteristics of RDBMS;

    • Data should be kept in tabular form in the database file; that is, it should be organized into rows and columns.
    • Each row of the table is called a record or tuple. The collection of such records is known as the cardinality of the table.
    • Each column of the table is called an attribute or field. The collection of such columns is known as the arity of the table.
    • No two records of a database table can be identical. Data duplication is avoided by using a candidate key. A candidate key is a minimal set of attributes required to identify each record uniquely.
    • Tables are related to one another with the help of foreign keys.
    • Database tables also allow NULL values; that is, if the value of any element of a table is not filled in or is missing, it becomes a NULL value, which is not the same as zero.

    Advantages of RDBMS;

    The following are the advantages of RDBMS;

    Data Structure;

    The tabular presentation is simple and easy for database end-users to understand and use. RDBMSs make data available through a natural structure and organization of the data. Queries can search any column of the database for a given condition or matching criteria.

    Multi-User Access;

    RDBMSs allow numerous users to access the same database at the same time. Integrated locking and transaction management functionality lets end-users access the database while it is being modified, prevents collisions between two users updating the same data, and keeps users from reading partially updated records.

    Privileges;

    Access control and privilege management features in an RDBMS allow the database administrator to limit access to authorized users and to grant rights to individual users according to the types of database activities they need to carry out. Authentication can be set up based on the remote client's IP address, in combination with user permissions, limiting access and modification to specific external computer systems.

    Network Access;

    RDBMSs allow access to the main database through a server daemon, a dedicated software program that responds to user requests on a network and allows database end-users to connect to and access the database. Users do not have to be able to log in to the host computer system to access the database, which provides convenience for end-users and a level of protection for the main database. Network access also permits system developers to build desktop tools and Web applications that communicate and transact with databases.

    Speed;

    The relational database model is not the fastest database system, but its advantages, such as simplicity, make the slower speed a reasonable trade-off. The optimizations built into the RDBMS, and the design of the databases themselves, boost performance, allowing RDBMSs to perform more than fast enough for most applications and data sets. Technological improvements in processor speed, together with diminishing memory and storage costs, allow database administrators to build extraordinarily fast systems that can overcome almost any database performance inadequacy.

    Maintenance;

    RDBMSs include maintenance utilities that allow the database administrator to easily test, repair, and back up the databases stored in the system. These functions can be automated using the automation technology built into the RDBMS, or the automation tools available in the operating system.

    Language;

    The powerful generic language called Structured Query Language (SQL) is supported by the RDBMS. The syntax of SQL is easy to understand and use; the language uses standard English words as keywords, making it fairly intuitive and easy to learn. Some RDBMSs add non-SQL, database-specific keywords, functions, and characteristics to Structured Query Language.

    Disadvantages of RDBMS;

    The following are the disadvantages of RDBMS;

    In an RDBMS, database normalization (database normal forms and functional dependencies) may produce relations that have no independent existence in the database or do not correspond to entities in the practical database. This complicates the use of the 'join' operation in query processing.

    • High price and extensive hardware and software support: large costs and setups are required to make these systems perform.
    • Scalability: when large amounts of data are added, servers with extra power and memory are needed.
    • Complexity: voluminous data creates complexity in understanding relations and may lower performance.
    • Structured limits: the fields or columns of an RDBMS are constrained within various limits, which can cause loss of data.
    • In an RDBMS, "many-to-many" relationships between entities are complicated to express.
    • Also, the RDBMS has dependencies such as domain, key, multi-valued, and join dependencies.
    RDBMS stands for Meaning and Definition; Image by Sven W. from Pixabay.
  • Facilities Manager Meaning Role Responsibilities Essay

    Facilities Manager Meaning Role Responsibilities Essay

    Facilities Manager Meaning, Definition, Role, and Responsibilities, with their Essay; facility management covers all kinds of complex operations, such as grocery stores, auto shops, sports complexes, jails, office buildings, hospitals, hotels, and other revenue-generating facilities. The facility manager's purpose is to create an environment that encourages productivity, is pleasing to clients and consumers, and is efficient.

    Here is the article to explain, What is the Meaning and Definition of Facilities Manager? Role and Responsibilities with their Essay!

    The meaning and definition of facilities manager includes a wide range of functions and support services. All staff, students, and volunteers are responsible for ensuring that they work in a manner that is safe for themselves and others, and for complying with the relevant requirements of national standards and the university's health and safety department. All staff, parents or carers, volunteers, and students are urged to read the nursery health and safety policy and the relevant parts of the university health and safety policy.

    Meaning and Definition of Facility management;

    Facility management is a profession that encompasses multiple disciplines to ensure the functionality of the built environment by integrating people, place, process, and technology; it is the integration of processes within an organization to maintain and develop the agreed services which support and improve the effectiveness of its primary activities.

    According to Alan M. Levitt,

    “a facility may be a space or an office or suite of offices; a floor or group of floors within a building; a single building or a group of buildings or structures. These structures may be in an urban setting or freestanding in a suburban or rural setting. The structures or buildings may be a part of a complex or office park or campus”.

    Facility management is hard to define because of its broad scope. It involves the coordination of everything that keeps a company's buildings, assets, and systems running. On top of managing day-to-day operations, the facilities manager must also execute the company's long-term strategic facility management plan.

    Total facilities management;

    Total facilities management includes everything needed for living, working, healthcare, education, commercial development, retailing, transportation, and communication undertakings.

    According to Steven M. Price,

    “facilities, professionals are being asked to contain costs while achieving maximum beneficial use- that is, to achieve more with less.”

    Some describe a facility simply as a physical place where business activities are done. Facility management's duty is to plan according to the needs and demands of the business activity; good facility management deals with those needs in the best and most effective ways possible. The responsibilities taken on by facility managers are explained below:

    • Observe the efficiency of the organization.
    • Make sure that the divergent processes, procedures, and standards present in a business complement rather than interfere with one another.
    • Observe all features of facility maintenance.
    • Tracking and responding to environmental, health, safety, and security issues.
    • Ensuring facility compliance with relevant regulatory codes and regulations
    • Educating the workforce about all manner of standards and procedures, from ordering office supplies to acting in the event of a disaster.

    The role and responsibilities of facilities managers;

    A facilities manager has a range of responsibilities, including overseeing the daily running of a building and reducing its operating costs. In any organization, the facility manager is responsible for the management services that support the business. Facilities managers manage the continual maintenance of the building, identifying health and safety issues to make sure the building is safe for use, and hold general responsibility for utilities, services, and daily logistical management. How to define the meaning and definition of facilities manager? A facilities manager is also responsible for managing catering and cleaning services and for space management throughout the building.

    • Facility managers are responsible for directing a maintenance staff.
    • A facility manager's duties relate to standard maintenance, mailroom, and security activities; he or she may also be responsible for providing engineering and architectural services, hiring subcontractors, maintaining computer and telecommunications systems, and even buying, selling, or leasing real estate or office space.
    • The managers are also responsible for considering federal, state, and local regulations.
    • Facility managers also integrate knowledge workers into a dynamic business environment of global competition, technological developments, security threats, and changing values.

    Scope of facilities management;

    Facilities management supports the core business activities a business performs and also provides a good career path with the associated motivation that it brings. Good facilities management always tries to introduce new ideas and knowledge to improve standards, improve the consumer's primary activities, and protect the associated investments. The scope of facilities management is thus wide and varied; its activities include security, cleaning, maintenance, catering, landscaping, hygiene, etc. Today the role and scope of facilities management have changed dramatically.

    Corporate social responsibility;

    Total corporate social responsibility can be subdivided into four primary criteria: economic, legal, ethical, and discretionary responsibilities. Mark S. Schwartz and Archie B. Carroll, "Corporate Social Responsibility: A Three-Domain Approach," Business Ethics Quarterly 13, no. 4 (2003): 503-530; and Archie B. Carroll, "A Three-Dimensional Conceptual Model of Corporate Performance," Academy of Management Review 4 (1979): 497-505.

    These four criteria fit together to form the whole of a company's social responsiveness. Managers and organizations are involved in several issues at the same time, and a company's ethical and discretionary responsibilities are increasingly considered as important as economic and legal issues.

    Business ethics are moral principles that guide the way a business behaves. Acting ethically involves distinguishing right from wrong and then making the right choice; examples include policies about honesty, health and safety, and corrupt practices.

    Supports;

    Facilities management also supports the board on aspects critical to facility management's operational activities, such as premises, the local community, and staff welfare. Facilities managers likewise play a vital role in the delivery of facilities across the several stages of a building's life cycle.

    Today, the challenge of facilities management is integrating resources with users' needs. Lavy (2008) concludes that facility management not only improves physical performance but also increases the satisfaction that users feel while staying, working, teaching, or learning in a building. The facilities manager needs to understand the link between the institution's aims and objectives and the various groups in the institution. The interface has to be strong; without it, it is easy to fail to work in the same direction. Therefore, a facility manager has to take the needs of the users into account as a basis for providing them with suitable facilities.

    Ever-growing space requirements alongside ever-growing unused spaces increase the gap between what is available and what is required. Facility managers also face several challenges in convincing higher management to approve an additional building or space.

    Health and safety;

    The bio-energy company management system always keeps in mind the development of a positive health, safety, and environment culture through the development and promotion of policies and procedures. It also provides training and monitoring services to employees and employers, which are intended to encourage employees to treat safety as an integral part of daily operations. All staff, students, visitors, and parents/carers should report any health and safety issues promptly to Melissa Leach or Susan Rogers, or to a senior member of staff in their absence.

    Health and safety issues will be discussed and recorded, and the relevant agencies will be informed of any concern that has occurred. The Nursery Manager and Deputy Manager also attend the Level 2 Award in Health and Safety in the Workplace, Risk Assessment Training, and Manual Handling Risk Assessment. Records of training undertaken by staff are kept by the Nursery Manager, along with planned dates for future course attendance and refresher courses as needed.

    Safety and Security Policy;

    At Phoenix, we aim to make the nursery a safe and secure place for the children, Parents/Carers, Staff, and any Visitors who may enter the setting. We aim to make all the children, parents/carers, and staff aware of health and safety issues to minimize the hazards and risks to enable them to thrive in a safe and healthy environment.

    Melissa and Sue are the members of staff who have undertaken the appropriate training and are responsible for recording risk assessments, updating policies, and ensuring others are aware of safety and security issues.

    Health and safety policies;
    • Health and safety is treated as a management priority and an integral part of the business.
    • All activities are carried out in a safe manner.
    • Hazards are identified and mitigated through formal assessment.
    • The organization complies with current health and safety legislation and applies best practice to all its activities.
    • Also, employees are encouraged to be proactive on health and safety issues.
    • All employees are required to co-operate with the organization and their co-workers in implementing the policy and to make sure that their work is without risk to themselves.
    Environment policy;
    • Improvement of the environmental management system through worker training, consultation, and involvement in identifying environmental impacts is an objective of the organization.
    • The environmental impact is analyzed within the organization, including the potential risk of pollution.
    • The organization always tries to cooperate with the applicable local authority and site landlords on relevant issues.
    • Also, The Company gives due consideration to environmental issues raised by customers and seeks to respond positively to customer-led environmental initiatives.
    • The Company works closely with those involved in the manufacturing supply chain; to achieve best practices in the environmental aspects of material sourcing, product manufacture, disposal, and recycling.

    All staff, students, visitors, volunteers, and parents/carers are made aware of the location of fire doors and fire exits and the means of escape from the nursery, as well as the location of the nearest fire extinguisher and fire alarm call points and the instructions for their use. All staff have attended the University's in-house Fire Warden Training. Emergency exit routes are always kept tidy and free from obstacles, and the fire siren is tested weekly. The Nursery Manager or Deputy Manager collects the register from the kitchen. Staff take responsibility for the children and assist them to immediately vacate the nursery through the safest exit, if possible through the garden and car park.

    Risk Assessment;

    The majority of the activities carried out in the nursery are generally low-risk in nature and do not require formal assessment. However, if we are planning a trip outside the nursery or carrying out an activity in which a child could be at risk, we carry out a written risk assessment. Risk assessments are carried out by Sue Rogers and Melissa Leach, and all staff contribute to these documents.

    Risk assessments are carried out on activities, the nursery environment, the outside environment, manual handling, and outings. They are regularly reviewed, and working documents are displayed in each area of the nursery. Should you have any queries or concerns of your own, please feel free to talk to Sue or Melissa. Risk assessments are brought to the attention of all relevant staff, students, parents/carers, and anyone involved in the activity. They are reviewed annually and periodically passed to the Health and Safety Department for checking, to ensure that they are suitable and sufficient.

    Importance of quality to facilities management;

    Professional facility management is used to strategically provide a quality working environment, but it requires top-level management support and accurate requirements defined by consumers. In today's environment of innovation and increasing competition among suppliers, facilities management service providers must implement quality management.

    Organizations succeed by introducing quality management techniques: productivity can be improved and absenteeism reduced by improving the internal environment. According to Alexander, "it is a total quality approach to sustaining an operational environment and providing support services to meet the strategic needs of an organization".

    Facilities Manager Meaning Role Responsibilities Essay; Image by Photo Mix from Pixabay.
  • Network Intrusion Detection Systems (NIDS) Comparison Essay

    Network Intrusion Detection Systems (NIDS) Comparison Essay

    The network intrusion detection system (NIDS) is a network security technology that monitors network traffic for suspicious activity and issues alerts when action is required to deal with the threat. Any malicious activity is reported and can be collected centrally using the security information and event management (SIEM) method.

    Here is the article to explain, Essay, and Comparison of Network Intrusion Detection Systems (NIDS)!

    Security information and event management (SIEM) software gives enterprise security professionals both insight into, and a track record of, the activities within their IT environment. The SIEM method incorporates outputs from multiple sources and employs alarm-filtering techniques to identify malicious actions. There are two types of systems, host-based intrusion and network intrusion detection. In this essay, I will be looking at both techniques, identifying what classifies as a NID and comparing different types of NIDS.

    Classification of Network Intrusion Detection Systems (NIDS);

    As previously highlighted in the introductory part of the essay; there are two types of systems, host-based intrusion, and network intrusion detection. They are known as HIDS or NIDS. They are different from each other as host-based intrusion monitors malicious activities on a single computer; whereas network intrusion detection monitors traffic on the network to detect intrusions. The main difference between both systems is that network intrusion detection systems monitor in real-time; tracking live data for tampering whilst host-based intrusion systems check logged files for any malicious activity. Both systems can employ a strategy known as signature-based detection or anomaly-based detection.

    Anomaly-based detection searches for unusual or irregular activity caused by users or processes. For instance, if the network was accessed with the same login credentials from several different cities around the globe all in the same day, it could be a sign of anomalous behavior. A HIDS using anomaly-based detection surveys log files for indications of unexpected behavior, while a NIDS monitors for anomalies in real-time.
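The several-cities-in-one-day example can be sketched as a toy anomaly check; the event records and the city threshold below are invented for illustration, not taken from any real IDS.

```python
from collections import defaultdict

def flag_anomalies(events, max_cities=2):
    """events: (user, date, city) tuples; return users who logged in from
    more than max_cities distinct cities on the same day."""
    seen = defaultdict(set)
    for user, date, city in events:
        seen[(user, date)].add(city)
    return sorted({user for (user, _), cities in seen.items()
                   if len(cities) > max_cities})

# Hypothetical login log: alice's credentials appear in three cities in one day.
events = [
    ("alice", "2024-05-01", "London"),
    ("alice", "2024-05-01", "Tokyo"),
    ("alice", "2024-05-01", "Lagos"),
    ("bob",   "2024-05-01", "Paris"),
]
print(flag_anomalies(events))  # ['alice']
```

A real anomaly-based IDS would model many more features of normal behavior, but the principle is the same: define a baseline and alert on deviations from it.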

    Signature-based detection monitors data for known patterns. HIDS running signature-based detection work similarly to anti-virus applications, which search for bit patterns or keywords within files, by performing similar scans on log files. Signature-based NIDS work much like a firewall, scanning keywords, packet types, and protocol activity entering and leaving the network; they also run similar scans on traffic moving within the network.
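Here is a minimal sketch of signature-based matching, assuming invented regular-expression signatures and made-up log lines; real NIDS signature formats (for example, Snort rules) are far richer than keyword regexes.

```python
import re

# Hypothetical signature database: name -> pattern to look for in traffic.
SIGNATURES = {
    "sql_injection": re.compile(r"union\s+select|or\s+1=1", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./"),
}

def scan(line):
    """Return the names of all signatures matched by one log line or payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(line)]

print(scan("GET /index.php?id=1 OR 1=1"))  # ['sql_injection']
print(scan("GET /../../etc/passwd"))       # ['path_traversal']
print(scan("GET /home"))                   # []
```

The sketch also shows the core weakness discussed later in the essay: a payload that is not in the signature database, or that is slightly rewritten, matches nothing.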

    Comparison of different types of Network Intrusion Detection Systems (NIDS);

    There are various types of NIDS available to protect the network from external threats. In this essay, we have discussed both HIDS (host-based) and NIDS (network intrusion detection systems), as well as signature-based IDS and anomaly-based IDS. The two are similar but function differently; when combined, they complement each other.

    For example, HIDS only examines host-based actions such as which applications are being used, which files are being accessed, and information that resides in the kernel logs. NIDS analyzes network traffic for suspicious activity. NIDS can detect an attacker before they begin an unauthorized breach of the system, whereas HIDS cannot detect that anything is wrong until the attacker has breached the system.

    Signature-based IDS and anomaly-based IDS contrast with each other. For example, anomaly-based IDS monitors activities on the network and raises an alarm if anything suspicious, i.e. anything other than the normal behavior, is detected.

    There are many flaws with anomaly-based IDS. Both Carter (2002) and Garcia-Teodoro (2009) have listed its disadvantages:

    • Appropriate training is required before the IDS is installed into any environment
    • It generates false positives
    • If the suspicious activity is similar to normal activity, it will not be detected.

    However, there are flaws with signature-based IDS. Carter (2002) highlights some disadvantages of signature-based IDS.

    • It cannot detect zero-day attacks
    • The database must be updated daily
    • The system must be updated with every possible attack signature
    • If an attack in the database is slightly modified, it is harder to detect

    Advances and developments of Network Intrusion Detection Systems (NIDS);

    There have been many advances and developments in NID over the last few years, such as honeypots and machine learning. Spitzner defines honeypots as computer systems that are designed to lure and deceive attackers by simulating a real network. Whilst these systems seem real, they have no production value; any interaction with them should be illicit. There are many kinds of honeypots, from low-interaction systems to high-interaction, more complex systems designed to lure and attract advanced attackers.

    For example, high-interaction honeypots provide attackers with a real operating system that allows the attacker to execute commands. The chances of collecting large amounts of information on the attacker are very high, as all actions are logged and monitored. Many researchers and organizations use research honeypots, which gather information on the attacker and the tools they used to execute the attack. They are deployed mainly for research purposes, to learn how to provide improved protection against attackers.
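A low-interaction honeypot can be sketched in a few lines: listen on a spare local port, record every connection attempt, and answer with a decoy banner. The FTP-style banner and the loopback self-connection used to demonstrate it below are invented for this sketch; real honeypots such as Honeyd emulate far more of a service.

```python
import socket
import threading

hits = []                        # "log" of attacker source addresses
ready = threading.Event()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))       # port 0 = let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def honeypot():
    ready.set()
    conn, addr = srv.accept()    # block until an "attacker" connects
    hits.append(addr[0])         # record where the connection came from
    conn.sendall(b"220 FTP server ready\r\n")  # decoy service banner
    conn.close()

t = threading.Thread(target=honeypot)
t.start()
ready.wait()

# Simulate the attacker with a plain TCP connection to the decoy port.
client = socket.create_connection(("127.0.0.1", port))
banner = client.recv(64)
client.close()
t.join()
srv.close()
print(hits, banner)  # ['127.0.0.1'] b'220 FTP server ready\r\n'
```

Since the port serves no production purpose, every recorded hit is by definition suspicious, which is exactly the property Spitzner's definition relies on.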

    Other Things;

    Another advancement in network intrusion detection is machine learning. Machine learning provides computers with the capability of learning and improving from events without being explicitly programmed. The main aim of machine learning is to allow computers to learn without human intervention and to intervene accordingly.

    Unsupervised learning algorithms are used when the information provided for training is neither marked nor classified. The task given to the machine is to group unsorted information according to patterns, similarities, and differences without any training data given beforehand. Unsupervised learning algorithms can determine the typical pattern of the network and report any anomalies without a labeled data set.
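The unsupervised idea can be sketched with a simple statistical baseline: learn the mean and spread of some traffic feature (packet size here) from unlabeled observations, then flag values far outside it. The traffic numbers and the three-sigma threshold are invented for illustration; production systems use richer models.

```python
import statistics

# Hypothetical unlabeled "normal" traffic: observed packet sizes in bytes.
baseline = [500, 510, 495, 505, 490, 515, 500, 498]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(size, threshold=3.0):
    """Flag a packet whose size lies more than `threshold` standard
    deviations from the learned mean -- no labels needed."""
    return abs(size - mean) / stdev > threshold

print([is_anomalous(s) for s in (502, 9000)])  # [False, True]
```

Because the baseline is learned rather than labeled, the detector can flag a packet size it has never seen before, which is the strength of unsupervised detection; unusual but benign traffic will trip it too, which is the false-positive weakness noted below.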

    One drawback of this approach is that it is prone to false-positive alarms, but it can still detect new types of intrusions. By switching to a supervised learning algorithm, the network can be taught the difference between a normal packet and an attack packet. The supervised model can deal with attacks and recognize variations of an attack.

    Implementation of Network Intrusion Detection Systems (NIDS) within an SME;

    With threats developing every day, businesses need to adapt to the changing landscape of network security. For example, a business should focus on developing a strong security policy. This helps to define how employees use IT resources and define acceptable use and standards for company email. If a business creates a set of clear security policies and makes the organization aware of these policies; these policies will create the foundation of a secure network.

    Another suggestion provided in the report by SANS is to design a secure network with the implementation of a firewall, packet filtering on the router, and using a DMZ network for servers requiring access to the internet.

    More things;

    Testing of this implementation must be done by someone other than the individual or organization that configured the firewall and perimeter security. Developing a computer incident plan is key, as it will help the business understand how to respond to a security incident. The plan will help to identify the resources involved and to recover from and resolve the incident. If a business relies on the internet for day-to-day operations, the company will have to disable the affected resources, reset them, and rebuild the systems for use again, which will resolve the issue.

    Using personal firewalls on laptops is another suggestion for businesses to consider. For example, laptop computers may be used in the office and, at other times, may be connected to foreign networks which may have prominent security issues.

    For example, the Blaster worm, which spread from August 11th, 2003, gained access to many company networks after a laptop was infected with the worm on a foreign network and the user subsequently connected to the corporate LAN. The worm eventually spread itself across the entire company network.

    From the report, SANS identified that personal laptops should have personal firewalls enabled to address any prominent security issues. They also highlighted that for laptops containing sensitive data, encryption and authentication will reduce the possibility of data being exposed if the device is lost.

    Conclusion;

    From my findings, I believe that NIDS is essential in protecting a company's network from external and internal threats. If a company chose not to implement a NID within the business, the subsequent impact could be that the company would cease to exist if an attack damaged customer records or valuable data.

    With the implementation of a NID within a company, the business can mitigate the impacts of an attack by using a honeypot to capture information about an attacker and what tools they used to execute the attack. This allows businesses to prepare themselves against attacks and secure any assets that could damage the company’s ability to operate. By enforcing a security and fair use policy within the company, employees are aware of the standards they must abide by when employed by the business.

    This also allows the company to scrutinize employees that do not follow the practices and take legal action if necessary. A business can hire managed security service providers who can assist in implementing the appropriate security measures for the business. Businesses must check whether the company has qualified staff and proven experience of their work as the main threat of most attacks on small to medium businesses lies within the company.

    Network Intrusion Detection Systems (NIDS) Comparison Essay; Image by Pete Linforth from Pixabay.
  • Global Digital Divide Examples Essay

    Global Digital Divide Examples Essay

    The phrase global digital divide has become a global phenomenon and has taken the world by storm; this essay explains it with examples. What exactly does this term mean and what does it entail? The phenomenon became current during the mid-1990s and is defined as the segregation between those who have access to advanced forms of technology and those who do not, specifically between the developing and developed worlds.

    Here is the article to explain, The definition of Global Digital Divide with an Essay and Examples!

    The global digital divide is an ongoing debate involving a variety of contributing factors that will be discussed in this paper, such as cultural, political, and economic issues, specifically in the context of how two African nations, South Africa and Mauritius, are combating the global digital divide.

    Moreover, this paper will use the success story of Mauritius to show how, once government institutions and powers are actively involved within communities by providing subsidized internet access, the division caused by the global digital divide is minimized.

    Furthermore, this specific case study of Mauritius provides hope and ambition to other African states, specifically South Africa: if a community is supported with powerful institutions and federal resources, combating the global digital divide is possible. Likewise, this paper will focus on both quantitative and, primarily, qualitative research measures that explain how Mauritius successfully combated the global digital divide, along with the obstacles which hindered South Africa's potential success in doing the same.

    Essay Part 01;

    Additionally, Mauritius is a small island nation in the Sub-Saharan region of Africa. Mauritius has a population of about 1.2 million; an estimated 70% of the population is aged between 15 and 64, and an estimated 88% are literate. Despite English being the official language, it is spoken by less than 1% of the population, while the majority (80%) speak Creole.

    Interestingly enough, Mauritius was previously colonized by both the Dutch and the French, although neither French nor Dutch is a prominent language in Mauritius. Mauritius moved towards establishing an English-speaking nation after the colonial period, which has significantly helped it in the world trading market and ultimately increased the nation's literacy rate to 88%. As well, "These efforts have been acknowledged in the e-government readiness ranking by the United Nations".

    Furthermore, the government of Mauritius proposed a five-year National ICT Strategic Plan in 2007. The plan aspires to convert Mauritius into a favored hot spot for ICT skills, expertise, and employment in the region. Once Mauritius converts into an ICT hub, its people will have the necessary skills to access the Internet without any challenges.

    Essay Part 02;

    Once established, the strategic plan also aims to hit social targets by the year 2011, which include increasing personal ownership by at least 12,000 in primary schools and 20,000 in households, increasing broadband internet penetration by at least 250,000, and establishing 150 public internet kiosks across the island.

    Furthermore, the targeted installment of kiosks throughout Mauritius primarily in geographically located areas; such as rural neighborhoods has been positively linked with ICT use. Findings include that perceived usefulness and subjective norm are both factors that lead to the positive use of ICT.

    Perceived usefulness can be defined as "a degree to which an individual believes that using a particular technology would enhance performance".

    To guarantee the relevance of internet kiosks, offering a diverse range of services, such as internet browsing, word processing, health care information, and e-mail, is more efficient than relying on third-party sources, which typically come at a cost. Thus, these advances are more likely to encourage the use of publicly subsidized kiosks.

    Essay Part 03;

    Additionally, the subjective norm is positively linked to ICT use. The subjective norm can be defined as "an individual's perception of the extent to which important social referents would desire the performance of a behavior", and this factor is relevant in Mauritius. For example, if a relative or friend suggests that public internet kiosks are helpful and encourages one to make use of them, the individual is more likely to believe his or her friend or relative and in return has the motivation and intention to use the public internet kiosk.

    Moreover, this essay will focus on a case study of South Africa and the challenges that this nation faces in combating the global digital divide, drawing on the article Addressing the Digital Divide, Online Information Review (2001).

    Cullen highlights that a major issue is the lack of physical access to ICTs. The constraint on physical access to ICT use in South Africa is that the majority of ICT centers and hubs are located in major cities rather than in geographically isolated areas such as rural neighborhoods. Constantly commuting to these locations is not feasible, and there is the further obstacle of the challenges that disabled people encounter.

    Essay Part 04;

    Therefore, not only is ICT use absent in rural areas, but the cost of commuting to the ICT centers is not feasible, and these commutes pose significant challenges for disabled people. Also, according to statistics on world connectivity, findings show that during the year 2000 South Africa's number was 440,000 compared to Mauritius's 1.8 million.

    Likewise, in the article Reevaluating the Global Digital Divide: Socio-demographic and Conflict Barriers to the Internet Revolution, Sociological Inquiry (2010),

    Robinson and Crenshaw highlight a vital constraint on Internet connectivity which is often dismissed: the impact and influence that political leaders have on the nation. Nations with liberal and democratic leaders are more likely to have citizens who are proactive and engaged in internet activity. Such leaders are also more likely to introduce activities and programs that motivate ICT use, similar to Mauritius's Strategic Plan. Moreover, the turmoil of the post-apartheid conflict in South Africa is still significantly relevant in today's society.

    Although this conflict occurred over 20 years ago, South Africa's trajectory was stagnant for some years, and it was not until recent presidential figures that democratic values became acceptable. This greatly impacts political institutions' ability to confidently and successfully introduce ICT use, simply because South Africa's primary concern was moving past an apartheid government; basic values such as marrying someone of another race and freedom of speech were the primary concerns rather than Internet connectivity.

    Essay Part 05;

    Additionally, in the article "Information access for development: A case study at a rural community center in South Africa" (2006):

    Jacobs and Hersleman identify the barriers that restrict ICT use in South Africa, including a "lack of awareness of the benefits of ICTs" and a "lack of ICT skills and support". As mentioned above, South Africa has progressed rather slowly post-apartheid. This plays a significant role in the barriers to ICT use because, although ICT hubs may have been established in populated cities such as Cape Town, Durban, and Soweto, one problem contributes to both the lack of awareness of benefits and the lack of ICT skills and support.

    A possible remedy is that "Facilities like community centers can assist by increasing users' familiarity with technology in non-threatening, social settings". Therefore, utilizing the staff and volunteers at community centers is imperative for increasing motivation and engagement with ICTs, especially because placing ICTs in facilities such as community centers provides little benefit if the community is unaware that these resources are available or of how exactly to access them.

    Essay Part 06;

    Furthermore, in the article "Time machines and virtual portals: The spatialities of the digital divide," Progress in Development Studies (2011):

    Graham highlights that cultural barriers play a significant role in the lack of Internet connectivity, information, and access. English is commonly spoken throughout South Africa; however, it is not widely spoken outside main cities such as Cape Town and Durban.

    Moreover, not only is English uncommon outside these main cities, but the level of English used on computers, kiosks, and other forms of ICT is not pitched at beginners.

    Ultimately, this creates a significant barrier to Internet access. Another challenge is that South Africa has ten main languages spoken throughout the country, excluding English.

    The languages spoken depend on the region of South Africa an individual is in. Unlike in Mauritius, English is not the most common language spoken in South Africa; an alternative, therefore, would be to provide translators at community centers or to install alternative language options in computer or kiosk settings.

    Essay Part 07;

    Moreover, in the article "The impact of connectivity in Africa: Grand visions and the mirage of inclusive digital development," The Electronic Journal of Information Systems in Developing Countries (2017):

    Friederici, Ojanperä, and Graham highlight that "telecommunication services have been found to lessen the financial vulnerability and susceptibility to shocks of poor households in South Africa". However, the poorest households may not benefit, simply because they do not have access to telecommunication services at all.

    These constraints may arise because poor households reside in rural areas with no telecommunication services nearby, the cost of commuting is beyond their means, and their comprehension of English is poor. Unlike in the Mauritius case study, where the government and other institutions placed publicly subsidized kiosks in both rural and urban areas, little has been done to mitigate the lack of mobility and accessibility.

    Essay Part 08;

    Overall, the chief challenge South Africa faces in combating or mitigating the global digital divide is the lack of physical access: ICTs are not placed in geographically isolated areas such as rural neighborhoods, making the commute costly and particularly challenging for those with disabilities.

    Another challenge is the significant lack of awareness of available ICTs and of how exactly to navigate them; community members do not possess sufficient skills to use ICTs. A final constraint South Africa faces is a lack of English literacy.

    English is spoken throughout South Africa, but it is not the dominant language, and providing ICTs that demand advanced English is a barrier to successful ICT use. By comparison, Mauritius succeeded in mitigating the global digital divide because English is its main language, it placed publicly subsidized kiosks in rural areas, and it provided the skills needed to navigate those kiosks.

    Essay Part 09;

    Moreover, as mentioned above, a variety of contributing factors have constrained South Africa's success in combating the global digital divide. The articles supporting this essay's account of South Africa's challenges draw primarily on qualitative rather than quantitative research.

    Throughout the articles on South Africa, there has not been a great deal of statistical data, as opposed to Mauritius. Additionally, South Africa's recent history of geopolitical affairs and conflict plays a significant role in its trajectory towards combating the global digital divide.

    Mauritius had fewer geopolitical and post-colonization obstacles than South Africa. Suggestions for South Africa are to place ICTs in geographically isolated regions with different language options, to advertise where ICTs can be found, and to staff ICT locations with employees or volunteers who are knowledgeable about navigating them.

    It is clear that Mauritius had fewer geopolitical and post-colonization obstacles than South Africa, which allowed it to succeed in combating the global digital divide; further research is needed to determine the current status of South Africa's trajectory. This paper also demonstrates that if a government understands the need for and importance of combating the global digital divide, by incorporating publicly subsidized kiosks and other forms of ICT, it is possible to strengthen internet connectivity.

    The definition of Global Digital Divide with an Essay and Examples! Image by StockSnap from Pixabay.
  • Big Data Visualization Techniques and Challenges

    Big Data Visualization Techniques and Challenges

    What are the techniques and challenges of big data visualization in information systems? This study examines what big data means, why it matters, and how it is used across industries, along with how visual analytics drives success in organizations. It also covers various big data analytics tools, such as Tableau, Power BI, and SAS, and compares them to discover the best fit based on a company's profile and goals. We examine how data visualization tools have helped big technology companies achieve competitive advantage while handling the challenges big data brings to visualization.

    Here is the article to explain, Techniques and Challenges of Big Data Visualization in Information Systems Essay!

    By 2025, the value of data is predicted to increase 10-fold. Virtually every branch of industry or business will generate a vast amount of data. The world will thus experience aggressive growth, and data left unutilized is a missed opportunity. To make matters worse, data is collected and stored faster than it can be turned into tangible decision-making. With the help of ever-evolving technology, visionaries are creating visualization methods to help turn raw data of no value into informative data.

    Big data serves organizations as a means to optimize their businesses. With the abundant data that organizations generate every day, the ability to turn data into decisions effectively and efficiently is crucial, so knowledge of analytics and visualization go hand-in-hand in tackling big data problems. Hence, a new interdisciplinary research field, "Visual Analytics," has been established, which aims to make the best possible use of information by combining intelligent data analysis with visual perception. Visual analytics knowledge has proved useful to the two most common professions in the big data world: data scientists and business analysts.

    Business Analytics;

    Business Analytics (BA) is defined as a data-centric approach that relies heavily on collection, extraction, and analysis tools to turn data into insights for decision-makers, and in most disciplines it is used by top management. Previously, BA was used to report what had happened in the past; nowadays, with the massive volume of data that can be generated, BA can exploit that data to predict the future and make breakthroughs.

    Data Science;

    Through big data, the need for a reliable source of information and a business support system has created a new and widespread business application: Data Science. The art of data science is multifaceted, combining the skills of computer science, advanced analytical and statistical skills, and knowledge of data visualization methods. Although there is no universally accepted definition of data science, it can be defined as a set of fundamental principles that support and guide the principled extraction of information and knowledge from data.

    One of the main things visualization helps with is presenting a model a data scientist has built to the reader. Data scientists often work with data that has hundreds of dimensions and no obvious mapping to the plane, so standard visualizations such as bar charts will not work. Therefore, novel visualizations employing parallel coordinates and other techniques are usually used for this type of data. Secondly, visualization can aid data mining, the process by which scientists aim to automatically extract valuable information from raw data through automatic analysis algorithms. Visualization has been found to benefit this process, helping the analysis arrive at an optimal point by appropriately communicating the results of the automatic analysis, which are often handed over in an abstract form.
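    As a concrete illustration of the preprocessing behind parallel-coordinates plots, here is a minimal sketch in plain Python (the function name and data are illustrative, not from any particular library) of the per-dimension min-max normalization that maps every axis onto a common [0, 1] scale before the polylines are drawn:

    ```python
    def normalize_columns(rows):
        """Scale each column (dimension) of `rows` to [0, 1] independently,
        so all axes of a parallel-coordinates plot share one scale."""
        cols = list(zip(*rows))                       # transpose to columns
        scaled = []
        for col in cols:
            lo, hi = min(col), max(col)
            span = (hi - lo) or 1.0                   # guard: constant column
            scaled.append([(v - lo) / span for v in col])
        return [list(r) for r in zip(*scaled)]        # transpose back to rows

    # Three records with three dimensions on wildly different scales
    data = [[10.0, 200.0, 3.0],
            [20.0, 100.0, 6.0],
            [30.0, 150.0, 9.0]]
    print(normalize_columns(data))
    # → [[0.0, 1.0, 0.0], [0.5, 0.0, 0.5], [1.0, 0.5, 1.0]]
    ```

    Each row can then be drawn as one polyline across the parallel axes without any single dimension dominating the picture.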

    Big Data Visualization Essay;

    In the visual analytics process above, the collected data is transformed according to the stream. For Business Analytics (BA), the transformed data is mapped into a visualization for a user to process into knowledge, usually in the form of decisions; the knowledge is then fed back into the data for continuous improvement and to enable analysts to reach better conclusions in the future.

    For the Data Science (DS) stream, the transformed data is mined to build a model serving certain objectives; the overall approach to the data is problem-agnostic. Once a model has been built, it needs to be visualized as well, or vice versa; there is a feedback loop between models and visualizations to get the right outcome for the objectives. The resulting knowledge comes from either the visualizations or the models themselves.

    In general, visualization is a better and faster way to identify patterns, trends, or correlations that would otherwise remain undetected in text or tables of numbers. Visualization also helps approach a problem in a new and creative way, tapping the human cognitive system to understand information hidden behind huge volumes of data. Humans can also interact with a visualization, which can be used to find more insights or to find the right questions.
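    To make the point about correlations concrete, here is a short plain-Python sketch (the series are illustrative) of the Pearson correlation coefficient, the quantity a scatter plot lets a viewer judge at a glance:

    ```python
    import math

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length series."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # A perfectly linear relationship scores (approximately) 1.0,
    # which a scatter plot shows instantly as points on a line.
    print(round(pearson([1, 2, 3, 4], [2, 4, 6, 8]), 6))
    ```

    A table of raw numbers hides this relationship; a scatter plot, or the coefficient above, exposes it immediately.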

    Techniques in Big Data Visualization;

    Visualization techniques are chosen according to user requirements. Conventional visualization makes use of tables, Venn diagrams, entity-relationship diagrams, bar charts, and pie charts. Below is a list of techniques for visualizing large amounts of data and gaining insight into it:

    • One-dimensional; It consists of one value per data item or variable. The histogram is the classic example.
    • Two-dimensional; As the name suggests, it has two variables. Bar charts, pie charts, scatter plots, and maps are types of 2D visualization.
    • Three-dimensional; It gives the user more information, through slicing techniques, iso-surfaces, 3D bar charts, etc.
    • Multi-dimensional; It gives a clearer picture by analyzing the variables from different perspectives. Parallel coordinates, autographics, etc. are examples of such visualization.
    • TreeMap; Here the data is nested in rectangles, each representing a branch of the tree.
    • Temporal technique; It can display the data on a timeline, as a time series, or in a scatter plot.
    • Network technique; It is used when you want to present data collected from social media in the form of a network.
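    As a sketch of the simplest of these techniques, the one-dimensional histogram, the following plain-Python fragment (function name and data are illustrative) buckets values into fixed-width bins, which is exactly the aggregation a histogram draws:

    ```python
    from collections import Counter

    def histogram(values, bin_width):
        """Bucket numeric values into fixed-width bins keyed by bin start,
        i.e. the counts behind a one-dimensional histogram."""
        return Counter((v // bin_width) * bin_width for v in values)

    counts = histogram([1, 2, 2, 5, 7, 8, 9, 9], bin_width=5)
    # bin 0 holds values in [0, 5), bin 5 holds values in [5, 10)
    print(dict(counts))
    # → {0: 3, 5: 5}
    ```

    Plotting one bar per bin turns this dictionary into the familiar histogram picture.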

    Challenges for Big Data Visualization or Visual Analytics;

    The main challenge with visual analytics is applying it to big data problems. Common challenges include technological ones, such as computation, algorithms, databases, storage, and rendering, along with human-perception ones, such as visual representation, data summarization, and abstraction. "The top 5 challenges in extreme-scale visual analytics," as addressed in the publication by SAS analytics, are as follows:

    • Speed requirement; In-memory analysis and expanding memory should be utilized to address this challenge.
    • Data understanding; There must be proper tools and professionals proficient in understanding the data beneath the surface to derive proper insight.
    • Information quality; One of the biggest challenges is managing large amounts of data while maintaining its quality. The data needs to be understood and presented in a proper format that increases its overall quality.
    • Meaningful output; Using the proper visualization technique for the data presented is necessary to produce meaningful output.
    • Managing outliers; When you cluster data for favorable outcomes, outliers will inevitably exist. Outliers cannot be neglected, because they might reveal valuable information, and they should be treated separately in separate charts.
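    To illustrate the outlier-management point, here is a minimal sketch assuming the common 1.5×IQR rule (one of several possible criteria, chosen here for simplicity; the quartile estimate is deliberately crude) that splits a series into inliers and outliers so the latter can be charted separately:

    ```python
    def split_outliers(values):
        """Separate values into (inliers, outliers) using the 1.5*IQR rule."""
        s = sorted(values)
        n = len(s)
        q1, q3 = s[n // 4], s[(3 * n) // 4]     # crude quartiles, fine for a sketch
        iqr = q3 - q1
        lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
        inliers = [v for v in values if lo <= v <= hi]
        outliers = [v for v in values if v < lo or v > hi]
        return inliers, outliers

    inliers, outliers = split_outliers([10, 12, 11, 13, 12, 95, 11, 10])
    print(outliers)
    # → [95]
    ```

    The main chart then plots the inliers at a readable scale, while the outliers go to their own chart rather than being discarded.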
    What are the Techniques and Challenges of Big Data Visualization in Information Systems Essay? Image by StockSnap from Pixabay.
  • Star Network Topology Essay Advantages Disadvantages Usage

    Star Network Topology Essay Advantages Disadvantages Usage

    Star Network Topology Essay: benefits, advantages, drawbacks, disadvantages, and usage. SYNOPSIS – This study focuses on the star network topology. A star network is a local area network in which all devices link directly to a central point called a hub. A star topology resembles a star, though not exactly. The findings of the study reveal that in a star topology every computer connects to a central node called a hub or a switch.

    Here is the article to explain, Advantages, Disadvantages, and Usage of Star Network Topology!

    A hub is a device where all the linking media come together. Data transmitted between the network nodes passes through the central hub. The project further explains the advantages, disadvantages, and usage of the star network topology. The centralized nature of a star network provides ease of management while also isolating each device in the network. The disadvantage of a star topology, however, is that network transmission relies heavily on the central hub: if the central hub fails, the whole network is out of action.

    Star networks are among the most common computer network topologies used in homes and offices. In a star network topology, it is possible to keep backups of all important data on the hub in a private folder; this way, if one computer fails, you can still reach your data from the next computer in the network by accessing the backup files on the hub. This type of network has been found to offer more privacy than other networks.

    Introduction to Star Network Topology;

    The main objective of this project is to discuss the advantages, disadvantages, and usage of the star network topology. A topology is the physical structure of a network. A star topology is a network structure comprising a central node to which all other devices attach directly and through which all other devices intercommunicate. The hub, the leaf nodes, and the transmission lines between them form a graph with the topology of a star.

    The star is one of the oldest and most common topologies in local area networks. Its design comes from telecommunication systems: in a telephone system, all calls are managed by a central switching station. Likewise, in a star topology each workstation of the network connects to a central node known as a hub. The hub is the device where all the linking media come together; it is responsible for running all activities of the network and also acts as a repeater for the data flow.

    Generally, when building a network with two or more computers, you need a hub. It is possible to connect two computers directly without one, but when adding a third computer to the network, a hub is needed for proper data communication. In a star network, the whole network relies on the hub; devices such as file servers, workstations, and peripherals are all linked to it.

    Data transmission;

    All data passes through the hub. When a packet arrives at the hub, the hub forwards it to all nodes linked through it, but only one node at a time successfully transmits. Data on a star network passes through the hub before continuing to its target. Different types of cable are used to link the computers, such as twisted pair, coaxial cable, and fiber optics. The most common cable medium in use for star topologies is unshielded or shielded twisted-pair copper cabling. One end of the cable plugs into the local area network card while the other end connects to the hub.
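    The forwarding behavior described above can be sketched as a toy model in plain Python (the class and method names are illustrative, not a real networking API):

    ```python
    class Hub:
        """Toy model of a star-topology hub: every packet a node sends is
        rebroadcast to all other attached nodes."""

        def __init__(self):
            self.nodes = {}                 # node name -> inbox of packets

        def attach(self, name):
            self.nodes[name] = []           # each attached node gets an inbox

        def send(self, sender, packet):
            for name, inbox in self.nodes.items():
                if name != sender:          # the hub forwards to everyone else
                    inbox.append((sender, packet))

    hub = Hub()
    for n in ("A", "B", "C"):
        hub.attach(n)
    hub.send("A", "hello")
    print(hub.nodes["B"], hub.nodes["C"], hub.nodes["A"])
    # → [('A', 'hello')] [('A', 'hello')] []
    ```

    A switch, by contrast, would look up the destination and deliver to a single inbox instead of flooding every port.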

    Due to its centralization, a star topology is easy to monitor and manage, which makes it advantageous. Since the whole network relies on the hub, if the whole network stops working, the problem likely lies with the hub. The hub makes troubleshooting easy by offering a single point for error detection, though the reliance on that single point is correspondingly high. The central function is cost-effective and easy to maintain.

    Star topology also has drawbacks. If the hub encounters a problem, the whole network fails. On the other hand, in a star network topology it is possible to keep backups of all important data on the hub in a private folder, so that if a computer fails you can still reach your data from another computer in the network by accessing the backup files on the hub.

    BACKGROUND STUDY;

    In this section, the researcher clarifies and explains in detail some of the advantages, disadvantages, and usage of the star topology. These three concepts are the core of this project.

    Benefits or Advantages of Star Network Topology;

    The benefits or advantages of the star network topology are as follows.

    Isolation of devices:

    Each device is isolated by the link that connects it to the hub, which makes isolating individual devices simple. This isolation also prevents any non-centralized failure from affecting the network. In a star network, a cable failure isolates the workstation that cable links to the central computer, but only that workstation is isolated. All other workstations continue to function normally, except that they cannot communicate with the isolated workstation.

    Simplicity:

    The topology is easy to understand, establish, and navigate, and its simplicity obviates the need for complex routing or message-passing protocols. As noted earlier, isolation and centralization simplify fault detection, as each link or device can be probed individually. Due to its centralized nature, the topology offers simplicity of operation.

    If a cable fails, the rest of the network is not affected:

    In a star topology, each network device has a home run of cabling back to a network hub, giving each device a separate connection to the network. If there is a problem with one cable, it generally does not affect the rest of the network. The most common cable medium in use for star topologies is unshielded twisted-pair copper cabling. If a small number of devices is used in this topology, the data rate is high. It is best for short distances.

    You can easily add new computers or devices to the network without interrupting other nodes:

    The star network topology works well when computers are at scattered points, and it is easy to add or remove computers. New devices or nodes can be added to the star network simply by extending a cable from the hub. If a device such as a printer or a fax machine is attached to the hub, all the other computers on the network can access it by simply accessing the hub; the device need not be installed on every computer in the network. The central function is cost-effective and easy to maintain. The layout also suits situations where the computers sit reasonably close to the vertices of a convex polygon and the system requirements are modest. And when one computer fails, it does not affect communication for the rest of the network.

    Centralization:

    Star topologies reduce the chance of network failure by linking all of the computers to a central node. All computers may therefore communicate with all others by transmitting to, and receiving from, the central node only. Because the central hub is the bottleneck, increasing its capacity, or adding additional devices to the star, scales the network very easily. The central nature also allows inspection of the traffic passing through the network, which helps evaluate all the traffic and spot suspicious behavior.

    Easy to troubleshoot:

    In a star network, the whole network relies on the hub, so if the entire network stops working, the problem likely lies with the hub. This makes it easy to troubleshoot by offering a single point for error detection, although the dependency on that single point is also very high.

    Better performance:

    A star network prevents unnecessary passing of data packets through nodes: at most 3 devices and 2 links are involved in any communication between two devices in this topology. The topology places a heavy load on the central hub, but if the hub has plenty of capacity, very high network use by one device does not affect the other devices. Data packets are delivered quickly because they do not have to travel through unnecessary nodes. The big advantage of the star network is that it is fast, because each computer terminal attaches directly to the central computer.
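    The "at most 3 devices and 2 links" bound can be sketched as a tiny hop-count function (illustrative Python; "hub" is just a placeholder name for the central node):

    ```python
    def star_hops(src, dst):
        """Link count between two devices in a star network:
        0 to itself, 1 to or from the hub, otherwise 2 (leaf -> hub -> leaf)."""
        if src == dst:
            return 0
        if src == "hub" or dst == "hub":
            return 1
        return 2

    # Any two workstations are at most 2 links (3 devices) apart.
    print(star_hops("A", "B"), star_hops("A", "hub"))
    # → 2 1
    ```

    Contrast this with a bus or ring, where the path length between two stations grows with the number of devices between them.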

    EASY INSTALLATION:

    Installation is simple, inexpensive, and fast because of the flexible cable and the modular connector.

    Drawbacks or Disadvantages of Star Network Topology;

    The drawbacks or disadvantages of the star network topology are as follows.

    If the hub or concentrator fails, the attached nodes are disabled:

    The primary disadvantage of a star topology is the high dependence of the system on the functioning of the central hub. While the failure of an individual link only results in the isolation of a single node, the failure of the central hub renders the network inoperable, immediately isolating all nodes.

    The performance and scalability of the network also depend on the capabilities of the hub.

    Network size is limited by the number of connections that can be made to the hub, and performance for the whole network is capped by its throughput. While in theory traffic between the hub and one node is isolated from the other nodes on the network, other nodes may see a performance drop if traffic to another node occupies a significant portion of the central node's processing capability or throughput. Furthermore, wiring up the system can be very complex.

    The primary disadvantage of the star topology is the hub is a single point of failure:

    If the hub were to fail, the whole network would fail, because the hub is connected to every computer on the network; there would be a communication breakdown between the computers.

    Star topology requires more cable length:

    When the network is extended, more cable is needed, and this results in an intricate installation.

    More Expensive than other topologies:

    It is expensive due to the cost of the hub, and star topology uses a lot of cable, making it one of the most costly networks to set up; you also have to add trunking to keep the cables out of harm's way. Every computer requires a separate cable to form the network. A common cable used in star networks is UTP, the unshielded twisted-pair cable; another is the RJ45-terminated Ethernet cable.

    Usage of Star Network Topology;

    Star topology is a networking setup used with 10BASE-T cabling (also called UTP or twisted-pair) and a hub. Each item on the network connects to the hub like the points of a star. The protocols used with star configurations are usually Ethernet or LocalTalk. Token Ring uses a similar topology, called the star-wired ring.

    Star topology is the most common type of network topology used in homes and offices. In the star topology, there is a central connection point called the hub, which is a computer hub or sometimes just a switch. In a star network, the best advantage is that when a cable fails, only one computer is affected and not the entire network.

    Star topology is used to reduce the probability of network failure by connecting all of the systems to a central node. The central hub rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node. All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central node only.

    Other Use;

    A star network is used to transmit data between the network nodes across the central hub. When a packet arrives at the hub, the hub transfers it to all connected nodes, but only one node at a time successfully transmits.

    In local area networks where the star topology is used, each machine is connected to a central hub. In contrast to the bus topology, the star topology gives each machine on the network a point-to-point connection to the central hub, so no single cable failure can bring down the whole network. All of the traffic that traverses the network passes through the central hub, which acts as a signal booster or repeater, in turn allowing the signal to travel greater distances.

    When your network needs greater stability and speed, the star topology should be considered. Using a hub gives you centralized administration and security control, low configuration costs, and easy troubleshooting. When one node or workstation goes down, the rest of the network remains functional.

    APPENDIX;

    As the name suggests, this layout is similar to a star. The illustration shows a star network with five workstations, or six if the central computer acts as a workstation. Each workstation is shown as a sphere, the central computer is shown as a larger sphere acting as the hub, and the connections are shown as thin flexible cables. The connections can be wired or wireless links.

    The hub is central to a star topology, and the network cannot function without it. It connects to each separate node directly through a thin flexible cable (10BASE-T cable): one end of the cable plugs into the connector on the network adapter card (either internal or external to the computer) and the other end connects directly to the hub. The number of nodes you can connect is determined by the hub.

    CONCLUSION;

    A star network is a local area network in which all computers are directly connected to a common central computer; every workstation connects indirectly to every other through the central computer. In some star networks, the central computer can also operate as a workstation. A star network topology is best suited to smaller networks and works efficiently when there is a limited number of nodes. One has to ensure that the hub or central node is always working, and extra security features should be added to the hub because it is the heart of the network.

    To expand a star topology network, you add another hub, producing a "star of stars" topology. In a star network topology, it is possible to keep backups of all important data on the hub in a private folder, so that if a computer fails you can still reach your data from the next computer in the network by accessing the backup files on the hub.

    Benefits, Advantages, Drawbacks, Disadvantages, and Usage of Star Network Topology Essay; Image by OpenClipart-Vectors from Pixabay.

    References; Advantages, Disadvantages, and Usage of Star Network Topology. Retrieved from https://www.ukessays.com/essays/technology/star-network-topology.php?vref=1