Programming Languages

My experience creating a simple game in MIT's Scratch and exploring programming languages from the textbook.

GuitarTuna: Mobile App Review

It's a tuner on your phone! Check it out!

The Role of Applications

Taking a look at using different applications to document a day in my life

Traveling Through a Network

Using ping and traceroute to evaluate a network

Computers in the Workplace

My observations of the role of technology in the insurance industry

Network Security

Reflections on some common security threats

Thursday, January 27, 2022

Newbie to Newbie Blog Part Two

 

Today we will be looking at algorithmic design and data structure techniques, and we will explain how to use these concepts to develop the best tools for our needs. Note the qualifier "for our needs": no single tool is best suited for every job. Similarly, no single algorithm or data structure will necessarily be the best for all programming applications. We will examine some common algorithms and data structures, point out their strengths, and identify some applications for which these algorithms and data structures may be well suited.

 

So, what would make an algorithm or data structure the best tool for the job? We can evaluate algorithms and data structures by how efficiently they use resources. Often, the resources in question are time and memory. Therefore, the best algorithm or data structure is often the one that accomplishes the most work while consuming the least time and memory.

 

One way to measure the efficiency of an algorithm is to track the rate at which the number of operations grows in proportion to the number of elements in a data structure. If the number of operations multiplies as elements are added, more resources will be needed to sustain operations. A steep growth curve like this corresponds to high Big-O complexity.

 

Big-O complexity is an algorithm analysis concept used to compare different algorithms' general resource consumption. Some common growth rates are O(1), O(log n), O(n), O(n^2), and O(n!). An algorithm falls within one of these growth rates based on an upper bound on the resources it consumes. The graph below illustrates different growth rates in terms of Big-O complexity.



As you can see, each curve uses different amounts of resources (operations) depending on how many elements there are. You might notice how an algorithm with O(n log n) complexity could perform better than an algorithm with O(n) complexity up to a point. Still, the O(n) algorithm maintains a linear growth rate, which eventually proves more efficient for larger inputs, while the O(n log n) curve continues to bend toward greater complexity.
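
To make those growth rates concrete, here is a small Java sketch of my own (not from the course materials) that counts the operations performed by a single loop and by a pair of nested loops over the same input sizes, roughly illustrating O(n) versus O(n^2) growth:

    public class GrowthDemo {
        public static void main(String[] args) {
            for (int n : new int[] {10, 100, 1000}) {
                long linear = 0;      // O(n): one pass over the elements
                for (int i = 0; i < n; i++) linear++;

                long quadratic = 0;   // O(n^2): every element paired with every other
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++) quadratic++;

                System.out.printf("n=%d  O(n) ops=%d  O(n^2) ops=%d%n", n, linear, quadratic);
            }
        }
    }

Growing n by a factor of ten grows the linear count by ten but the quadratic count by a hundred, which is the difference the curves illustrate.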

Another aspect to consider when deciding how to implement data structures is the program’s goal. For example, if you have data that you will need to sort or search, or if you will be inserting and removing elements from specific points in the data set, you might consider using a list structure. Linked lists and arrays are the most common list data structures, and either can perform these functions. Stacks are another data structure like lists, but their functionality is more limited. Stacks can add (push) or remove (pop) elements only from one end. Stacks are last-in, first-out, or LIFO, because only the most recently added element can be popped off the stack. Since stacks are simpler than lists, it can be more efficient to use them where a full list would be excessive. A queue is like a stack because elements can only be added at one end, but a queue is first-in, first-out, or FIFO, instead of LIFO as we see with stacks.
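
As a quick illustration of the LIFO/FIFO difference, here is a hypothetical sketch of my own using Java's ArrayDeque, which can behave as either a stack or a queue depending on which methods you call:

    import java.util.ArrayDeque;

    public class StackQueueDemo {
        public static void main(String[] args) {
            ArrayDeque<String> stack = new ArrayDeque<>();
            stack.push("first");                // push adds to the top
            stack.push("second");
            System.out.println(stack.pop());    // prints "second" -- last in, first out (LIFO)

            ArrayDeque<String> queue = new ArrayDeque<>();
            queue.add("first");                 // add enqueues at the tail
            queue.add("second");
            System.out.println(queue.remove()); // prints "first" -- first in, first out (FIFO)
        }
    }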

Aside from our primary data structure, we must also consider how we will implement the functions we need to perform. Searching and sorting are some of the most common functions used in computing, and there are several different algorithms available for each of these tasks. One option for searching is to go straight through a list, end to end, until you find the element you are looking for, known as a linear search. On average, you will have to check half of the elements in a list before finding the target while using linear search. This is not the most efficient searching method, but it is often acceptable for smaller data sets and is simple to implement. A more advanced search algorithm that can dramatically outperform linear search for large data sets is a binary search. Binary search cuts the list in half repeatedly until it finds the target. However, binary search can only be performed on a sorted list. Sorting a list introduces more time and space costs to the overall implementation, so one must find a balance where the benefits outweigh the costs. Otherwise, more straightforward options like linear search might be more appropriate. 
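
Here is a minimal sketch of both search methods (my own illustration; the data and names are made up). Note that the array must be sorted before binary search can run:

    import java.util.Arrays;

    public class SearchDemo {
        // Linear search: check each element until the target is found. O(n) on average.
        static int linearSearch(int[] data, int target) {
            for (int i = 0; i < data.length; i++) {
                if (data[i] == target) return i;
            }
            return -1; // not found
        }

        // Binary search: repeatedly halve a *sorted* array. O(log n).
        static int binarySearch(int[] sorted, int target) {
            int low = 0, high = sorted.length - 1;
            while (low <= high) {
                int mid = low + (high - low) / 2;
                if (sorted[mid] == target) return mid;
                if (sorted[mid] < target) low = mid + 1;
                else high = mid - 1;
            }
            return -1; // not found
        }

        public static void main(String[] args) {
            int[] data = {42, 7, 19, 3, 86};
            System.out.println(linearSearch(data, 19)); // works on unsorted data
            Arrays.sort(data);                          // the sorting cost must be paid first
            System.out.println(binarySearch(data, 19)); // then binary search pays off
        }
    }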

Being a newbie to programming, my first concern in developing a structured program would be to get it to work. From there, I could turn to more sophisticated approaches to algorithmic design and data structure techniques. I have gained a solid foundation of knowledge through this course which will help me identify and understand the goals and methods of algorithm and data structure design, and I’m looking forward to applying that and building on it as I continue to learn.





Thursday, December 16, 2021

Java Installation and OOP Concepts

 

This week we start our explorations into data structures and algorithms through CPT 307: Data Structures & Algorithms. This course makes use of the Java programming language, and our first task was to install Java and an integrated development environment (IDE) of our choice. I followed the course guidance and downloaded NetBeans to use as my IDE. However, first I downloaded and installed the latest Java SE Development Kit. Installation was a breeze after deciding on which installation pack to download (I chose the MSI installer package). After installing, we were tasked with creating a simple “Hello World” program based on the tutorial provided. The tutorial was written based on an earlier version of NetBeans, and this video helped to fill in the gaps and successfully run my first program in Java:
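
For anyone following along, the finished program only amounts to a few lines. Mine looked roughly like this (the class name will vary with whatever your IDE generates):

    public class HelloWorld {
        public static void main(String[] args) {
            // Print a greeting to the console -- the traditional first program.
            System.out.println("Hello World!");
        }
    }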

 



Java is often referred to as an object-oriented programming (OOP) language. OOP is implemented through an organization of objects. To understand what that means, you should first understand what a class is. Nakov, in Fundamentals of Computer Programming, described classes as descriptions or models of real entities, which possess characteristics and display behaviors. An object is an instance of such a class. One of the great benefits of objects in programming is that they can be used to represent complex ideas in a simple way; that is, we can use an object without necessarily understanding all its inner workings. This is one of the four key concepts of OOP and is known as abstraction. Abstraction makes programming easier and more flexible.
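
To put that in code, here is a small made-up example: the Guitar class is a model of a real entity, and the code that uses a Guitar object only calls tune() without needing to know how tuning works internally, which is abstraction at work.

    public class Guitar {
        private String model;            // a characteristic of the entity

        public Guitar(String model) {
            this.model = model;
        }

        public void tune() {             // a behavior of the entity
            // The caller does not need to know how tuning happens -- that is abstraction.
            System.out.println(model + " is now in tune.");
        }

        public static void main(String[] args) {
            Guitar myGuitar = new Guitar("Takamine"); // myGuitar is an object: an instance of the Guitar class
            myGuitar.tune();
        }
    }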

Inheritance is another essential property of OOP. It allows classes to maintain properties as they are passed down through a hierarchy. If a parent class possesses a certain property, then its child class inherits that property without the need to define it independently. A superclass can pass down traits to its subclasses, which can in turn pass traits further down to their own subclasses, as described in Oracle’s Java Tutorials.
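
A tiny hypothetical example of that hierarchy:

    class Vehicle {                          // parent class (superclass)
        void start() {
            System.out.println("Starting up...");
        }
    }

    class Car extends Vehicle {              // child class (subclass) inherits start()
        void drive() {
            System.out.println("Driving away...");
        }
    }

    public class InheritanceDemo {
        public static void main(String[] args) {
            Car car = new Car();
            car.start();  // inherited from Vehicle without being redefined
            car.drive();  // defined by Car itself
        }
    }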

Encapsulation is another main concept of OOP and refers to how variables of a class are hidden from other classes. Encapsulation can be achieved in Java by declaring variables as private. This allows control over what variables in a class can be read or written by other classes.
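
A short made-up example: the balance field is private, so other classes can only reach it through the methods the class chooses to expose.

    public class BankAccount {
        private double balance;                 // hidden from other classes

        public double getBalance() {            // read access is allowed
            return balance;
        }

        public void deposit(double amount) {    // writes are controlled and validated
            if (amount > 0) {
                balance += amount;
            }
        }
    }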

Polymorphism is the final major concept of OOP that we will discuss. JavaTpoint describes polymorphism by defining the Greek words it comes from: poly (many) and morphs (forms). In programming, this translates to the potential to perform a single action in more than one way. In Java, polymorphism can be implemented at either compile time or runtime and is achieved by method overloading or method overriding. Method overloading happens when a class has multiple methods with the same name but different parameters. Method overriding works with inheritance and happens when a child class defines the same method as its parent class.
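
A compact hypothetical sketch showing both forms:

    class Shape {
        double area() { return 0; }                     // default behavior

        // Method overloading (compile time): same name, different parameters.
        double area(double scale) { return area() * scale; }
    }

    class Circle extends Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }

        // Method overriding (runtime): the child replaces the parent's version.
        @Override
        double area() { return Math.PI * radius * radius; }
    }

    public class PolymorphismDemo {
        public static void main(String[] args) {
            Shape s = new Circle(2.0);        // a Circle referenced through its parent type
            System.out.println(s.area());     // calls Circle's area() at runtime
            System.out.println(s.area(0.5));  // overloaded version, which in turn uses Circle's area()
        }
    }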

Monday, December 13, 2021

CPT304 Reflection

 

CPT304: Operating Systems Theory & Design: Retrospective

           

            Looking back at the past five weeks learning about operating systems in this course, I now reflect on what I have learned and address the six discussion points while providing revised versions of the visual aids that I developed throughout the course.

 

Describe features of contemporary operating systems and their structures.

 

           There is no single contemporary operating system, and the unique goals of each computing environment influence the features and structures of each system. For example, a peer-to-peer network would be constructed with connectivity and security as high priorities, while real-time embedded systems require reliability and strict adherence to timed processes. In general, an operating system must provide a user interface, manage resources, and execute programs.

           User interfaces can range from command line prompts to complex graphical interfaces. These environments allow humans to interact with an operating system through input and output devices. The operating system must manage resources including memory, storage, file systems, and I/O devices. The main goal behind managing these resources is to allow programs to execute. The operating system must coordinate the kernel and user space to control access to sensitive system resources without allowing data corruption by the user or application. The contemporary operating system is interrupt-driven, a property that allows switching of resources between the user and the protected kernel, and system calls allow programs to make requests of the operating system (Silberschatz et al., 2014). The separation of policy from mechanism first becomes apparent in the design of an operating system and is a common theme across many aspects of operating system implementation.





Discuss how operating systems enable processes to share and exchange information.

 

           A process refers to a program in execution, and it includes program activity, stack, data section, and heap (dynamically allocated memory) (Silberschatz et al., 2014). A process control block (PCB) includes all components of a process and its interactions with system resources such as process state, program counter, CPU registers, CPU scheduling, memory management, accounting, and I/O information (Silberschatz et al., 2014). 

           Coordinating concurrent threads has become a significant focus of operating system design with multi-threaded processes. A thread is the basic unit of CPU utilization and includes a thread ID, program counter, register set, and stack (Silberschatz et al., 2014). Multi-threaded processes have significant benefits, including program responsiveness, resource sharing, economy of resources, and scalability (Silberschatz et al., 2014). There are three main multi-threading models: many-to-one, one-to-one, and many-to-many. The one-to-one model is probably the most common in modern multiprocessing systems and is characterized by allowing multiple threads to run in parallel (Silberschatz et al., 2014). In the absence of parallel processing, processes may still run concurrently, meaning each process can make progress through its execution over time, but not necessarily simultaneously (the operating system can switch rapidly between processes to give the appearance of parallel processing). There are obstacles to overcome in process synchronization, including the critical-section problem, which requires that only one process execute in its critical section at any given time. A critical section can include actions such as changing variables, updating tables, or writing to a file. Peterson's solution to the critical-section problem uses flags and while loops to achieve mutual exclusion, progress, and bounded waiting, thereby avoiding race conditions.
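
Below is a rough Java sketch of Peterson's solution for two threads. It is my own illustration rather than textbook code; in Java the shared fields must be declared volatile so each thread sees the other's updates, and production code would normally use synchronized blocks or java.util.concurrent rather than busy-waiting.

    public class PetersonLock {
        private volatile boolean flag0 = false; // thread 0 wants to enter its critical section
        private volatile boolean flag1 = false; // thread 1 wants to enter its critical section
        private volatile int turn = 0;          // whose turn it is to wait

        public void lock(int id) {              // id is 0 or 1
            if (id == 0) {
                flag0 = true;
                turn = 1;                                   // politely yield the turn
                while (flag1 && turn == 1) { /* busy wait */ }
            } else {
                flag1 = true;
                turn = 0;
                while (flag0 && turn == 0) { /* busy wait */ }
            }
            // the critical section begins once the while loop exits
        }

        public void unlock(int id) {
            if (id == 0) flag0 = false;                     // leaving the critical section
            else flag1 = false;
        }
    }

Together, the flags and the turn variable provide the mutual exclusion, progress, and bounded waiting described above.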




 

 

Explain how main memory and virtual memory can solve memory management issues.

 

           Memory management issues arise from pursuing memory management goals, which include relocation, protection, organization, and sharing. Relocation refers to moving processes from one part of memory to another, and it is accomplished by mapping virtual to physical addresses in real-time (Silberschatz et al., 2014). Protection is necessary to preserve memory locations from being overwritten in error. It is especially crucial to ensure that kernel memory cannot be overwritten by user processes, which can be accomplished through limit registers (Silberschatz et al., 2014). Managing the transfer of processes between storage or virtual memory and main memory falls under the principle of organization.

           Virtual memory uses storage devices found in traditional computing, such as disk drives, to extend the capacity of main memory. Virtual memory allows processes to share memory through the technique known as demand paging, which enables processes to use only portions of memory instead of loading a whole process into physical memory (Silberschatz et al., 2014). Virtual memory can also use copy-on-write, allowing parent and child processes to share pages (Silberschatz et al., 2014). Page replacement is a technique used to mitigate page fault penalties, but it is challenging to program and is usually implemented with an approximation of least-recently-used protocols (Silberschatz et al., 2014).
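
As a loose illustration of the least-recently-used idea (this is not how a kernel implements page replacement, and real systems use approximations of LRU), a handful of frames can be modeled in Java with an access-ordered LinkedHashMap that evicts the least recently used page when a new page is demanded:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class LruFrames<K, V> extends LinkedHashMap<K, V> {
        private final int frames; // number of physical frames available

        public LruFrames(int frames) {
            super(16, 0.75f, true); // accessOrder = true keeps recently used pages at the end
            this.frames = frames;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > frames; // evict the least recently used page when the frames are full
        }

        public static void main(String[] args) {
            LruFrames<Integer, String> memory = new LruFrames<>(3);
            memory.put(1, "page 1");
            memory.put(2, "page 2");
            memory.put(3, "page 3");
            memory.get(1);               // touching page 1 makes it recently used
            memory.put(4, "page 4");     // demands a frame, so page 2 (least recently used) is evicted
            System.out.println(memory.keySet()); // prints [3, 1, 4]
        }
    }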




 

Explain how files, mass storage, and I/O are handled in a modern computer system.

 

           The file system objectives in an operating system include creating files, writing files, reading files, repositioning within files, deleting files, and truncating files (Silberschatz et al., 2014). Multiple users or processes often must share access to files, which creates the need for techniques such as file locks to ensure access is limited to a single process (Silberschatz et al., 2014). Modern operating systems often use page caching with virtual addresses as a more efficient substitute for caching physical disk blocks (Silberschatz et al., 2014). Modern operating systems can use multiple file directory structures, including single-level, two-level, tree-structured, acyclic-graph, and general graph directories. Each structure is characterized by the interactions between directories, subdirectories, and files. Most structures do not allow sharing of files, but acyclic-graph structures and general graph structures do. However, general graph structures also allow self-referential cycles, which can compromise the file system's integrity.
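
For example, Java exposes advisory file locking through java.nio. In this small sketch (the file name is made up), the writer acquires an exclusive lock before writing so that other cooperating processes cannot write to the file at the same time:

    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class FileLockDemo {
        public static void main(String[] args) throws Exception {
            Path path = Path.of("shared-report.txt"); // hypothetical shared file
            try (FileChannel channel = FileChannel.open(path,
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                try (FileLock lock = channel.lock()) { // blocks until an exclusive lock is granted
                    channel.write(ByteBuffer.wrap(
                            "one writer at a time\n".getBytes(StandardCharsets.UTF_8)));
                } // the lock is released here
            }
        }
    }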

           Mass storage is commonly implemented on hard disk drives (HDDs), which store binary data on magnetic disc platters; a read head mounted on a disk arm can later recover the data and load it into main memory. HDDs are non-volatile storage and will retain the information recorded after power is removed. Solid-state disks (SSDs) are another common mass storage device and use flash memory. SSDs are faster than HDDs but degrade more from repeated read/write cycles. Magnetic tapes can also be used for mass storage. They are stable but very slow and are used mainly for enterprise backups. The operating system is responsible for scheduling disk access for HDDs. Several algorithms are used to determine the order in which each pending request is serviced. The first-come, first-served (FCFS) method is the simplest and fairest to pending requests but is often the least efficient. The shortest-seek-time-first (SSTF) algorithm can be more efficient than FCFS but may lead to starvation of requests located on remote cylinders of the disk platter. The SCAN algorithm can strike a compromise between fairness and efficiency as the disk arm oscillates between the extremes of the cylinder space, servicing pending requests along the way. In some cases, SCAN can even be more efficient than SSTF.
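
To make the difference concrete, here is a small sketch of SSTF ordering with a made-up request queue (FCFS would simply service the requests in the order given):

    import java.util.ArrayList;
    import java.util.List;

    public class SstfDemo {
        // Shortest-seek-time-first: always service the pending request closest to the current head position.
        static List<Integer> sstfOrder(List<Integer> requests, int head) {
            List<Integer> pending = new ArrayList<>(requests);
            List<Integer> serviced = new ArrayList<>();
            while (!pending.isEmpty()) {
                int closest = 0;
                for (int i = 1; i < pending.size(); i++) {
                    if (Math.abs(pending.get(i) - head) < Math.abs(pending.get(closest) - head)) {
                        closest = i;
                    }
                }
                head = pending.remove(closest); // move the head to the chosen cylinder
                serviced.add(head);
            }
            return serviced;
        }

        public static void main(String[] args) {
            // Hypothetical queue of cylinder requests with the head starting at cylinder 50.
            System.out.println(sstfOrder(List.of(40, 180, 35, 120, 10, 90), 50));
            // FCFS would instead service them in arrival order: 40, 180, 35, 120, 10, 90
        }
    }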

           I/O devices include storage devices like disks, transmission devices like network connections, and human interface devices like screens, keyboards, microphones, clocks, and speakers, among others (Silberschatz et al., 2014). The operating system interacts with I/O devices through ports and busses, such as PCI, SATA, SCSI, and ATAPI. The kernel I/O subsystem interacts with the device drivers on the software side, which in turn interact with device controllers on the hardware side, making the devices work. Polling and interrupts are methods that establish communications throughout the system (Silberschatz et al., 2014). Direct memory access (DMA) allows device controllers to transfer data to and from memory directly without occupying CPU resources. This method is most often used in the transfer of large amounts of data, such as between the hard drive and memory (Silberschatz et al., 2014).




 

Outline the mechanisms necessary to control the access of programs or users to the resources defined by a computer system.

 

           Protection controls the access of programs or users to the resources defined by a computer system. Protection prevents violation of access restrictions and, more generally, ensures that program components only access resources in ways defined by stated policies (Silberschatz et al., 2014). An essential idea governing domain access is the principle of least privilege, which indicates that users and processes should possess the minimum authority needed to execute their tasks (Silberschatz et al., 2014). Traditionally, domain-based protections use access matrices to document the accesses allowed to objects by domains. Domain-based protections are usually implemented with access control lists (ACLs), which specify which domains can access which objects. ACLs provide a mechanism to add, remove, or change domain access through the object side of the access matrix. Capability lists require each domain to gain access to objects indirectly through capabilities, like a token that specifies which object and which authority the domain is allowed. Capability lists can operate without the burden of searching through entire access lists to confirm the authority to access, but they can make it more difficult to revoke access since there will not be a list of domains with access for each object. Language-based protection is developing alongside high-level languages to give users more flexible and efficient control over permissions.
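
A toy Java sketch of the two viewpoints (the domain and object names are made up): an ACL attaches a set of allowed domains to each object, while a capability list attaches a set of reachable objects to each domain.

    import java.util.Map;
    import java.util.Set;

    public class ProtectionDemo {
        public static void main(String[] args) {
            // Access control lists: object -> domains allowed to use it
            Map<String, Set<String>> acl = Map.of(
                    "payroll.db", Set.of("hr_domain"),
                    "printer",    Set.of("hr_domain", "user_domain"));

            // Capability lists: domain -> objects it holds capabilities for
            Map<String, Set<String>> capabilities = Map.of(
                    "hr_domain",   Set.of("payroll.db", "printer"),
                    "user_domain", Set.of("printer"));

            // The same access matrix, checked from either side
            System.out.println(acl.get("payroll.db").contains("user_domain"));       // false: denied by the ACL
            System.out.println(capabilities.get("user_domain").contains("printer")); // true: capability present
        }
    }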

           Security works with protection to prevent harm from external environmental factors. Secure systems exist only if resources are used and accessed as intended. It is impossible to achieve total security, but mechanisms must still be in place to minimize security breaches (Silberschatz et al., 2014). Typical security breaches include breach of confidentiality, breach of integrity, breach of availability, theft of service, and denial of service (Silberschatz et al., 2014). The four levels of security include physical, human, operating system, and network. Physical security requires protection from attacks on the machines and facilities in which operating systems function. Human security involves protection from intentional or accidental breaches resulting from tactics such as social engineering or other attempts to gain unauthorized access from outside the computing environment. Operating systems can be susceptible to accidental breaches by way of runaway processes, which create a denial-of-service situation. Vulnerabilities created by stack overflows could also invite intentional attacks from unauthorized users. Externally launched denial-of-service attacks also constitute security breaches on the network level (Silberschatz et al., 2014). The mechanisms needed to protect against security breaches are as varied as the attacks themselves and range from on-site security and law enforcement to cryptology, user authentication, and firewalls.






 

Recommend how you will use these concepts about operating systems theory in future courses and/or future jobs.

 

           I came into this course with very little knowledge of operating systems, and I have found the content enlightening. I am certain that the lessons I have learned will be invaluable in my future courses and hopefully in my career. I do not currently work in the IT field, but I still use operating systems throughout my entire workday. I have gained insight into operating systems' design and functionality, which helps me understand how they work. Perhaps these concepts will aid me in troubleshooting computer problems of my own or helping others to fix their computers. I am also interested in expanding my knowledge of real-time embedded systems and their operating systems due to my background in electronics. I think this course has given me valuable insight I can use in that field and the IT field. This is an essential foundation of knowledge that I would not have gained without taking this course.

References

 

Silberschatz, A., Galvin, P. B., & Gagne, G. (2014). Operating system concepts essentials (2nd ed.). Retrieved from https://redshelf.com/

Sunday, August 29, 2021

History and Future of Computers


Today I will be examining the history and future of computers and relating this topic to the lessons I have learned throughout INT100. A survey of history reveals several precursors to what we now call a computer, from the Pascaline mechanical calculator to the Jacquard weaving machine, which used punched cards as a form of mechanical memory containing pattern instructions for a loom. Perhaps most closely associated with early computing is the Difference Engine developed by Charles Babbage in the early 1800s. The Difference Engine was an assembly of mechanical movements designed to allow automatic production of mathematical tables through digital computation (Copeland, 2000). Computing machines were limited to mechanical movements to facilitate their functions until researchers in the 1940s developed a device using electronic vacuum tubes instead to decrypt secret messages (Copeland, 2000). However, these machines were limited to specialized functions and were not readily adaptable to more general-purpose tasks. The development of more sophisticated memory arrangements involving magnetism further propelled computing into the modern age.

Vacuum tubes require significantly high voltages to operate and occupy significant physical space, so it is natural that a smaller and more efficient mechanism would be preferred if it could perform the same functions. This mechanism came in the form of the transistor, which further revolutionized the development of computing machines and allowed significantly reduced form factors. We can observe the result today: computers have become increasingly small, and the density of transistors that can occupy a silicon substrate has increased exponentially. For much of their history, computers were generally so large that they occupied entire rooms. The 1970s saw the introduction of personal computers such as the Altair 8800 and the Apple II, which sparked the industries we see today that have made computers accessible to an unprecedented number of people (Ceruzzi, 2010).

Computers certainly did not stop developing in the 1970s, and we have seen a continuation and acceleration of the development of smaller, more powerful, and cheaper machines since then. Partly due to the ever-increasing density of transistors that can occupy a silicon wafer, processors have continued to become faster and more capable of performing increasingly complex tasks every year. The development of other hardware components, such as solid-state drives, has recently increased the speed at which a computer can access data.

Although the future is unpredictable in computer technology, recent developments in computer networking and hardware hint at some possible trends we might see growing in the future. Cloud computing uses the advanced state of computer networks to transfer data storage from local hardware to massive offsite servers. Cloud computing provides convenient access and portability to users working across multiple devices, often at the cost of a subscription fee. We see an increasing philosophy of software as a service rather than as a product. In the past, most software products were purchased through a one-time transaction for an unlimited license to use the software. More recently, software companies have instead charged recurring subscription fees for a time-limited license to use their software. When the time limit expires, the user will need to renew their subscription or forfeit their license to use the software. This trend appears only to be increasing, and I would expect it to continue to expand into the future.

Recently, the term “quantum computing” has increasingly come up in speculation about the future of computing. Quantum computing stems from the well-known but difficult-to-understand field of quantum physics (Bova et al., 2021). Currently, computer algorithms are rooted in classical physics, whereby an object can occupy only one point in space and time. Thinking of binary logic: a switch can be either on or off, which is the fundamental property on which all computing is currently based. The advantage of quantum computing would be that the information does not need to exist in either an on or off state. It could exist in both or somewhere in between, which is an exciting prospect in the field of computing because it would mean that a computer would not have to cycle through many iterations of ons and offs to perform its functions as it currently does. Quantum computing promises much faster computing speeds because of this prospect. This means that any functions presently limited by the serial nature of digital logic could potentially be made possible or much easier with the advent of quantum computing.

I will more directly relate some aspects of the history of computers with the lessons I learned in INT100 below.

Fundamentals and History

The history and future of computers relate directly to the fundamentals of information technology. By observing the historical developments in hardware, software, and philosophy that have taken us to our current situation in information technology, we can identify the patterns that inform our predictions for the future and inspire us to actively seek the future we would like to see. The history and future of computers show us how the concepts of information technology and computer science have developed over the years and how the development of major hardware components has directly influenced the usage and availability of computers. Since their introduction, computers have operated on digital logic, which has taken us from simple punch cards to the incredible computing power that we see today. Quantum computing is the first prospect that has challenged this fundamental means by which computers work, and it offers exciting possibilities for the future of computing.

Hardware, Programming Languages, and Software

Throughout the history of computing, we have seen the development of programming languages alongside advances in hardware components to take advantage of the possible applications for which we can use computers. Machine language in the most primitive early computer interfaces has developed into complex and adaptable high-level languages that abstract the granular instructions needed to direct a machine into coherent languages that humans can use to direct a computer’s functions easily. 

Databases, Networks, and Security

Application software has historically been developed to provide user-friendly interfaces with which people can control specific aspects of a computer’s functions. Individual computers and computer networks continue to develop to allow accessing and processing information in databases across multiple platforms. An application like Microsoft Excel might allow a user to process information in a database, while increasing access to the internet and computer networks has made this information more easily shared with others. 

As computer networks have expanded throughout history, they have become more vulnerable to security breaches. Indeed, computers have a long history with security, as some of the earliest computers were developed to breach enemy security by decrypting secret codes. Today, security threats come in the form of malware, DDoS attacks, and various other malicious activities designed to compromise computing systems. As security measures become more robust, so do the methods that malicious actors use in their attacks. Therefore, it has been necessary for network security to evolve throughout history, as it will continue to be so in the future.

 

References

Bova, F., Goldfarb, A., and Melko, R. (2021, July 16). Quantum computing is coming. What can it do? Harvard Business Review. https://hbr.org/2021/07/quantum-computing-is-coming-what-can-it-do

Ceruzzi, P. (2010, July). "Ready or not, computers are coming to the people": Inventing the PC. OAH Magazine of History. 24(3):25-28. https://www-jstor-org.proxy-library.ashford.edu/stable/25701418

Copeland, B. J. (2000, December 18). The modern history of computing. The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/win2020/entries/computing-history/


Tuesday, August 24, 2021

Network Security


 

The benefits of computer networks are great and many, from facilitating the exchange of information to the procurement of goods and services through online markets. However, this interconnectedness also exposes each part of the network to certain risks. Today we will explore some of the more common threats to network security, from their causes to steps we can take to prevent and mitigate the damage they can inflict.

In an earlier discussion, we examined the use of ping and traceroute commands to analyze network performance, but did you know that these same utilities can be used maliciously to cripple a network? The main threat that ping commands pose to a network is the targeted repetition of ping requests toward a particular network node to overwhelm network bandwidth through a Denial of Service, or DoS, attack. The objective of a DoS attack is to compromise a network node to the point of being unresponsive to legitimate requests through an inundation of malicious echo requests. A Distributed Denial of Service, or DDoS, is a coordinated form of DoS attack coming from multiple machines, often without each operator’s knowledge, using involuntary botnets (Yihunie et al., 2018). DDoS attacks pose a more significant threat to networks since their disruptive force is multiplied through each infected machine. To successfully mitigate the damage of a DDoS attack, it is necessary to isolate malicious IPs from legitimate network traffic and limit their access to the network. It is also prudent to have contingencies, including access to alternative network bandwidth resources, to compensate for influxes in network traffic.

DDoS attacks are often made possible by another form of network security threat, social engineering. Social engineering attacks are designed to mislead people into granting access to privileged information by manipulating human behavior (Wang et al., 2021). In the example of a DDoS attack, the malicious party may solicit a response from the target by playing on their human nature, such as eliciting sympathy through a pleading email or misrepresenting themselves as a figure of authority, so the mark complies with their request. Once the target grants the access that the malicious party seeks, their personal information may be gathered and exploited, or their computer can be enlisted into a botnet by the malicious party with which they can execute a DDoS attack on another party. Since social engineering attacks rely on the target to voluntarily grant access (albeit usually through deceit), antivirus measures are ineffective in combatting them. Prevention of social engineering attacks requires vigilance and skepticism from the target to identify likely attacks and abstain from interacting with the attack or complying with the requests that it makes. Although security software will not necessarily prevent a social engineering attack, some applications can identify and remove malware that has already infected a system, so if a target falls prey to a social engineering attack, they have opportunities to rectify their error.

Social engineering also plays into our third type of network security threat: password cracking. Phishing is a type of social engineering that convinces a target to click on a link or attachment in an email, leading to malicious software being installed on the target’s computer. The malware can contain password-extracting software, or the attacker can persuade the target to volunteer their credentials under illegitimate pretenses (Jancis, 2021). There are many different methods for malicious actors to obtain passwords from their targets, including the various forms of social engineering, brute force attacks, and modified variations of brute force attacks that narrow the guessing parameters based on secondary information such as personal details learned about the target. Brute force attacks try all possible combinations of a password until they hit on the correct one. The most effective methods to reduce the risk of compromised passwords include remaining vigilant against social engineering attacks and maintaining strong passwords and good password hygiene. Strong passwords are unique, not used elsewhere, and have a sufficient number and diversity of characters to prevent successful password guessing and brute force attacks.
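
A rough back-of-the-envelope sketch of why length and character diversity matter; the guessing rate here is an assumption I picked purely for illustration:

    public class BruteForceMath {
        public static void main(String[] args) {
            double guessesPerSecond = 1e9; // assumed attacker speed, purely illustrative

            // Search space = (size of character set) ^ (password length)
            double digitsOnly8  = Math.pow(10, 8);  // 8-character password using only digits
            double mixedChars12 = Math.pow(94, 12); // 12 characters drawn from printable ASCII

            System.out.printf("8-digit password:  %.0f combinations, exhausted in %.2f seconds%n",
                    digitsOnly8, digitsOnly8 / guessesPerSecond);
            System.out.printf("12-char password:  %.2e combinations, about %.2e years to exhaust%n",
                    mixedChars12, mixedChars12 / guessesPerSecond / 3.15e7);
        }
    }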

Malicious actors are indeed adaptive to improving network security measures, so our resolve to maintain network security integrity must remain strong to ward off the ever-evolving forms of attack targeting our networks. It may not be possible to guarantee that networks are impervious to security breaches, but good security hygiene, common sense, and healthy skepticism go a long way toward securing our networks.

 

 

References

Jancis, M. (2021, August 20). Most popular password cracking techniques: learn how to protect your privacy. Cybernews. https://cybernews.com/best-password-managers/password-cracking-techniques/

Wang, Z., Zhu, H., & Sun, L. (2021, January 14). Social engineering in cybersecurity: Effect mechanisms, human vulnerabilities and attack methods. IEEE Access, 9, 11895-11910. DOI: 10.1109/ACCESS.2021.3051633

Yihunie, F., Abdelfattah, E., & Odeh, A. (2018, May 4). Analysis of ping of death DoS and DDoS attacks. 2018 IEEE Long Island Systems, Applications and Technology Conference (LISAT). DOI: 10.1109/LISAT.2018.8378010


Computers in the Workplace



I have worked in the insurance industry for many years, and my work has been almost exclusively on computers. Virtually every piece of information in my work and the industry is processed through a computer at some point. From servicing policies to paying claims, computers play a role in every step of the process. Computers allow employees to connect with each other and customers, process and share information, and carry out all the other tasks needed to keep business going.

One prominent trend in the insurance industry, as I am sure is true in many other industries, is automation. As computing power improves and machines become more adept at learning, employers can delegate more tasks to automated processes that require less human intervention. Computers can handle many repetitive tasks, and artificial intelligence also shows promise with more complex work. For example, improvements in the performance of chatbots limit the need for human customer service employees, and insurance companies are eager to adopt applications that can analyze images and documents that would otherwise require a human to review. The prospect of autonomous vehicles also poses the potential for a significant transformation of the auto insurance industry, from questions of liability to possibly reduced exposure due to safer roadways.

Some degree of computer literacy is required for most insurance industry employees since most of the work is performed using computers. If someone cannot operate a computer, they will be unable to do the job. Most insurance industry employees will need to be familiar with common productivity software, like the Microsoft Office Suite and be able to adapt to various other applications as they roll out. More advanced computing skills are also in demand in the insurance industry, which employs a wide variety of IT professionals. It is also beneficial to understand how computers work in remote settings where on-site IT support is unavailable to service an employee’s computer. I predict that automation will continue to gain presence in the insurance industry in the next ten years. I believe that evolutions in hardware, operating systems, and networking will contribute to the development of more powerful artificial intelligence, which will continue to push the boundaries of what machines can do, reducing operating costs and offering more services to customers in the process.

 


Traveling Through a Network

The ping and traceroute activities showed me how packets travel through the network by hopping across multiple addresses until reaching the destination. Besides Google, I chose Volkswagen.de in Germany and Takamineguitars.co.jp in Japan as my destinations in this activity. I thought this would help me see how geographic location plays into ping and traceroute results. Starting with the destination closest to me, I sent four packets to Google in the United States with an average ping of 156 ms. I also sent four packets to Volkswagen in Germany with a longer average ping of 255 ms. Takamine in Japan had the longest average ping, at 342 ms. The average ping times correspond to the geographical distances of the destinations from my location. My ping results are below:

[Screenshots: ping results for Google, Volkswagen, and Takamine]

The traceroute activity was interesting because it showed me how many contacts it took for the packets to reach their destination. In the case of Google, I am not sure exactly where the location of the final destination is, but judging by the ping results from each hop, the packets look like they might be following a convoluted route because the last set of pings for the three packets were faster than some of the previous hops. This was not the case with Volkswagen or Takamine, which show a general increase in ping times the closer my packets get to their destination (and the further they get from me). Interestingly, all three traceroutes timed out on similar hops. Hops 3-6 timed out on each of my traceroute attempts. This tells me that my packets probably follow the same route up to the seventh hop, where the paths diverge toward their unique destinations. It also suggests that the routers at those locations are not responding to my requests. Maybe a firewall or something is blocking my IP and preventing my packets from routing through those points. Perhaps the facilities hosting those routers are not operating correctly. There have been some storms and high winds in my region lately, so it might be possible that part of my network is affected. It would be interesting and possibly helpful to know exactly where those locations are because they appear to be broken links in my network. My traceroute results below:

[Screenshots: traceroute results for Google, Volkswagen, and Takamine]

Sunday, August 22, 2021

Documenting a Day

 



Microsoft has provided a robust suite of content-creation software with Microsoft Office. Microsoft Word (word processing), Excel (spreadsheets), and PowerPoint (presentations) each have unique strengths and applications. I have used the three different applications to document a day in my life, which allowed me to identify which application was most appropriate for each type of information presentation. Each application has unique functions, advantages, and disadvantages, and each specializes in different aspects of presenting information.

Comparison of Application Functions

The word processor application is ideal for creating formatted text (Vahid & Lysecky, 2017). Spreadsheet applications allow users to organize, calculate, and visualize data. Presentation applications provide visually appealing and easily digestible delivery of information. These three types of applications share some features, such as editing text fonts and other properties and inserting images. Word processing applications allow users to create primarily written documents with custom formatting, which allows a professional and uniform appearance. Spreadsheet applications offer the ability to organize information by dividing it into cells, which allows for sorting, calculating, and even creating visual representations of the data like charts and graphs. Presentation applications like PowerPoint enable slideshows that can provide a visual aid to document information sequentially. Animations, images, and other effects help the user create a visually appealing presentation to engage audiences.

 

 Advantages and Disadvantages of Each Application

Word processing applications have the advantage of presenting written information at whatever length is needed. Word processors do not require information to be abbreviated for formatting concerns, as a presentation application might. The disadvantage of word processors is that they cannot organize or sort data, and they do not allow the advanced visual presentation elements that a presentation application would.

Spreadsheets have the advantage of allowing the user to organize, sort, and calculate information and create visual representations of data such as charts and graphs. However, a spreadsheet is not usually best at presenting the information because it does not allow explanations of the data.

Presentation applications have the advantage of allowing appealing visual aids to help illustrate the information to an audience. This can help keep the audience interested in the information being presented. Presentation applications have the disadvantage of having limited space on each slide, so it may be necessary to abbreviate some of the information that would fit easily in a word processing application. According to NCSL (2017), slideshows can also become distracting with excessive animations and visual effects, so it is important to keep the focus on the content and avoid the temptation to use too many effects.

Recommendation for Application to Document a Day

Each application has its strengths, and the best application to present a documentation of a day might depend on the audience and purpose. For an analysis of time usage, a spreadsheet would be best. For telling a complete story with unabbreviated details, a word processing application would be better. A presentation application would work best as a visual aid, perhaps combined with an oral accounting of the day’s details. I preferred the word processor to describe my day because it allowed me to tell a complete accounting without the formatting limitations of a presentation application or the limitation to quantitative data of a spreadsheet.

There are other tasks for which each type of software would be best suited. A word processing application would be best for writing an essay, creative writing assignment, or technical manual. A spreadsheet would be best for analyzing quantitative data and creating graphs and charts, documenting an inventory, or calculating statistics. Presentation software would be valuable for outlining the talking points of an oral presentation and providing the audience a visual reference for those points, or for creating an educational presentation. To determine which application is best suited for a given task, consider the audience and purpose of the information.

 

 

 References

NCSL (2017, August 8). Tips for making effective PowerPoint presentations. National Conference of State Legislatures. https://www.ncsl.org/legislators-staff/legislative-staff/legislative-staff-coordinating-committee/tips-for-making-effective-powerpoint-presentations.aspx

Vahid, F., & Lysecky, S. (2017). Computing technology for all. Retrieved from zybooks.zyante.com/