SDLC Models

Name of student

Course Title

Instructor

Date of submission


Abstract

SDLC models are significant in large-scale software and systems development projects because they outline the structure and guidelines of the process. SDLC models come in various types depending on the number of phases. This research paper describes the 7-step and the 4-step SDLC models and compares and contrasts the similarities and differences between them.


INTRODUCTION

The Systems Development Life Cycle (SDLC), also referred to as application development, is a systems development approach to problem-solving that comprises various phases, each with specified steps (Liberto, 2003). Each phase requires inputs and has specifically outlined activities, and its outcome is the input to the next phase in the model. The SDLC model is a staple tool for information systems developers and analysts, with widespread application in large-scale software and systems development projects. An SDLC model provides the guidelines and structure of a software development process and also prescribes several documents, outcomes, or deliverables for every phase.

7-STEP SDLC MODEL

Initiation Phase

The initiation phase commences when management or other stakeholders in an organization identify a problem that can be solved by the development or modification of an information system. The initiation phase identifies and validates an opportunity to enhance business performance, outlines the assumptions and constraints on the solution, and recommends the exploration of alternative ways to meet the requirement (Michigan Technological University, 2010). The initiation phase also identifies the project scope under study and formulates a suitable solution to the problem, taking into consideration factors such as cost, time, resources, and the short-term and long-term benefits. The organization has to identify the IT systems that fit its strategic objectives.

Feasibility Phase

This is the initial investigation of the problem to determine whether the project should be adopted. If the project is pursued, the feasibility study develops a project plan and budget estimates that will be used in later phases of the project.

Requirements Analysis Phase

The requirements analysis phase uses the high-level requirements established in the previous phase to outline detailed functional user requirements. The phase produces measurable and testable requirements that relate to the business need; because these requirements are used in the systems design phase, they should be adequate. The purpose of this phase is to develop the evaluation and test requirements and a comprehensive data and process model, including system inputs and outputs.

Design Phase

In this phase, the system is designed to meet the functional requirements established in the requirements analysis phase. Various elements are considered in the design phase to mitigate risk, namely the performance of a security risk assessment, the identification of possible risks and mitigation features, the allocation of processes to resources, and the development of a plan to transfer data to the new system.

Development Phase

During the development phase, detailed requirements and design are translated into system components, individual units are tested for usability, and the IT system is prepared for integration and testing. The new system is tested to check for errors, bugs, and interoperability (Babers, 2015). User acceptance also takes place in this phase, as the business validates whether the functional requirements have been satisfied in the new system.

Implementation phase

During the implementation phase, the new system is installed in the organization's departments to perform the intended business functions. The activities involved include user training and distribution of user documentation, installation of software and hardware, and integration of the system into the organization's processes (Michigan Technological University, 2010). During this phase, careful project management is required because of the potential for missed deadlines, project creep, and cost overruns.

Operation and Maintenance

The new system installed in the implementation phase is monitored in the operation and maintenance phase. The purpose of monitoring is to ensure that it performs in agreement with the user requirements specified in the planning phase. The new system also undergoes maintenance and modification to ensure effective and efficient functionality; the kinds of maintenance a system undergoes include adaptive, corrective, preventive, and perfective. Modification of the new system allows for the addition of new features and benefits that improve its performance (Shelly & Rosenblatt, 2011). Any proposed changes to the new system require the developers to return to the scoping step and proceed through the cycle again so as to best evaluate, design, and undertake the changes.

4-STEP SDLC MODEL

The four-step model develops a system through four stages, namely Planning, Development, Implementation, and Review (Liberto, 2003).

Planning

The activities performed in the planning phase include investigating and gathering the requirements, undertaking the feasibility study, and reviewing and streamlining the business requirements. In the investigation and gathering of requirements, the users and analysts identify the problem that needs a solution (Khosrowpour, 1996). A feasibility study is undertaken to determine the necessity of the project. The deliverables of this phase include a final list of requirements and a requirements understanding document.

Development

The outcome of the planning phase is translated into a system design plan. The process involves activities such as programming and procedure development: in programming, the programs in the system are coded and tested, whereas procedure development involves writing and testing procedures for the different users. The phase also involves a risk analysis assessment to identify potential risks and alternative solutions. The deliverables of the development phase include a document highlighting the respective risks and mitigation plans.

Implementation

During the implementation phase, the actual development and testing of the system take place. Actual development involves translating the deliverables of the development phase into program code, with emphasis placed on meeting the requirements of the organization and developing a suitable design. The implementation phase results in the system code, test cases and results, a test summary report, and a defect report (Shelly & Rosenblatt, 2011). The implementation phase also involves training and educating users on how to operate the new system, as well as converting the current system to the new system, for instance by running the two in parallel.

Review and Evaluation

During the review and evaluation phase, the end-users of the new system evaluate it and provide feedback. The specialists and developers use this feedback to make modifications to the system. Tests are also conducted on the outcome of the implementation phase to find out whether the system functions and whether it satisfies the business requirements identified in the planning phase. The review phase is significant in the development of information systems because it provides specialists and analysts with an opportunity for growth and improvement (Liberto, 2003). The review also indicates when the usability cycle of the present system is approaching its end and when the life cycle of the new system begins.

Compare and Contrast 7-Step and 4-step SDLC Model

The seven-step and four-step SDLC models have several similarities and differences in their application and characteristics as used in different enterprises. Both models are conceptualized, designed, and validated so as to understand the complex behavior of particular entities, although the features of the entities handled by each model may differ (Misra & Harekrishna, 2013). Another similarity is that both models contain the elements of Planning, Analysis, Design, Implementation, and Support (PADIS) (University of Maryland, Baltimore County, 2007). SDLC models are responsible for the execution of a project from initiation to deployment. The models are also similar in that each phase requires input and its outcome affects the next phase of the model.

The seven-step and four-step SDLC models differ in where the primary emphasis is placed. The seven-step SDLC model is a longer process and hence requires more time and resources than the four-step SDLC model (Sharma, Sarkar, & Gupta, 2012). The four-step SDLC is more costly for smaller projects than the seven-step SDLC model and also requires more skill to undertake a risk assessment.

CONCLUSION

SDLC models are significant in systems development because they introduce levels of management authority that provide coordination, timely direction, review, and approval of systems development projects. The management of an organization should adopt the appropriate SDLC model so as to ensure the successful implementation of information systems.


REFERENCES

Babers, C. (2015). The Enterprise Architecture Sourcebook, Volume 1 (2nd ed.). Lulu.com.

Khosrowpour, M. (1996). Information Technology Management and Organizational Innovations: Proceedings of the 1996 Information Resources Management Association International Conference (p. 408). Washington: Idea Group Inc (IGI).

Liberto, J. R. (2003). Technology and Purpose: Data Systems and the Advancement of the Not-For-Profit Organization (pp. 34-36). iUniverse.

Michigan Technological University. (2010). System Development Lifecycle. Retrieved from https://www.security.mtu.edu/policies-procedures/SystemDevelopmentLifecycle.pdf

Misra, & Harekrishna. (2013). Managing Enterprise Information Technology Acquisitions: Assessing Organizational Preparedness (pp. 256-283). IGI Global.

Sharma, S., Gupta, D., & Sarkar, D. (2012). Agile processes and methodologies. International Journal of Computer Science and Engineering, 4(5), 892-898.

Shelly, G., & Rosenblatt, H. J. (2011). Systems Analysis and Design (9th ed.). Cengage Learning.

University of Maryland, Baltimore County. (2007). Presenting a Conceptual Model for the Systems Development Life Cycle (p. 9). ProQuest.

Wednesday, 05 April 2017 04:51

Explaining the Reasons Why Projects Fail

The reasons why projects fail include a lack of user involvement, which leads to a lack of user input.
Incomplete requirements and specifications lead to failure because resources and utilities that need to be incorporated into the project are missed.
Changing requirements and specifications result in a loss of focus and affect the scope of the project, thus contributing to project failure.
Lack of executive support means there is little influence over surveillance, monitoring, and controlling of the project, causing it to fail (www.projectsmart.co.uk).
Technological incompetence deprives the project of the skills and materials needed to deliver an acceptable and satisfying result, leading to failure.
Unrealistic expectations make the project either too large or too small, resulting in over- or under-budgeting and causing the project to fail (www.projectsmart.co.uk).
Unclear objectives lead to a lack of common goals, causing the project team to pull in different directions and contributing to project failure.
An unrealistic time frame affects the entire schedule of a project by pushing it beyond its proposed completion date. Unrealistic timing also affects the project milestones, leaving no reliable way of determining progress (www.projectsmart.co.uk).

How do project managers influence project outcome?
Project managers determine the strategies for implementing the success criteria that lead to a high-quality, acceptable project. They influence the success of a project by selecting and recruiting team members who have the potential to meet all project objectives (Ambadapudi & Shreenath, 2014). Managers are responsible for developing project teams by promoting a culture of project development, and for ensuring that the team can embrace adjustments and adapt to new changes in the organization. The manager is also responsible for motivating the project team by managing emotional factors and helping members maintain emotional stability (Ambadapudi & Shreenath, 2014). The manager must perform evaluation and validation, ensuring that the project stays on the right course, and he or she defines the project milestones and evaluates the progress of the entire development process.
Discuss your personal experience with failed projects.
Across various project development life cycles, I have encountered projects that ended in total failure. One example was a project to develop a student management system, which failed after undergoing repeated changes caused by a lack of specific needs and specifications. Initially, the project was being developed to manage and control the student registration process. However, the stakeholders demanded additional functionality, such as a hostel management system, a finance and billing system, and student portals. The new requirements changed the scope of the project, making it more complex and difficult to implement. The cost grew to three times the proposed budget, and new resources such as programming experts were needed to meet the new specifications but could not be funded. The uncontrolled change of user specifications, combined with the lack of funds, ultimately sank the project.
Conclusion
The project development process should have well-defined specifications, the full contribution and involvement of users, and assurance that all resources required for the project are sufficient. The lack of these essential requirements can contribute to project failure.

References
Ambadapudi S., M. & Shreenath S. (2014). Does People Behavior Impact Projects? How? And What Do We Do About It?
The Standish Group Report: CHAOS. Retrieved from https://www.projectsmart.co.uk/white-papers/chaos-report.pdf

Wednesday, 22 March 2017 13:41

DBA SQL tuning



Question 1: If a database application is experiencing performance problems due to poorly designed SQL, what are the performance tuning steps that can be done to improve the SQL execution/performance?
1. Identify high-impact SQL
The first step toward improving SQL execution is to rank the SQL statements according to their number of executions, which also defines the tuning order. The "dba hist sql summary" table may be used to locate the SQL statements that are executed most often (Yagoub et al., 2008). The most frequently executed SQL statements should be tuned first.
2. Determine the execution plan
The second step in tuning SQL is to determine the execution plans of the SQL statements identified in the previous step. Many third-party tools for displaying execution plans exist in the market. One of the most useful utilities for determining an SQL statement's plan is Oracle's EXPLAIN PLAN utility, which asks Oracle to parse the SQL statement and then display the access path without executing the statement (Yagoub et al., 2008).
3. Tune the SQL statement
For SQL statements that have a sub-optimal execution plan, there are several methods of tuning: adding SQL hints to modify the execution plan, rewriting the SQL statement with global temporary tables, and rewriting the SQL statement as PL/SQL (Tow, 2009). A hint, in this case, is a directive included in the SQL statement to change the access path of an SQL query; in some cases this can yield a performance improvement of up to 20 times. A call to a PL/SQL package can also be used to replace the SQL, with the package consisting of stored procedures that perform the query.
Question 2: What is an online backup?
An online backup involves backing up data to a remote storage device over an Internet connection so that the backup can be accessed using a browser (Schmied & Thomas, 2004). Many online backup services offer a Web-based administration console to help in accessing the data as well as monitoring the health of the backups. The backed-up data is encrypted and stored on the external devices of the provider's data centers.
What steps are required for an online backup?
• The first step in setting up an online backup is to install the online backup software and carry out the necessary configuration.
• The backup files are prepared on the server before they can be transferred to the online backup software.
• The files to be backed up are selected, and then the backup takes place.
• Select how you would like to run the backup.
• Test the backups to make sure that they can be restored when required and to confirm consistency between the original database and the backup.
Explain all reasons for using an online backup
Online backups offer organizations an excellent way of protecting their data. They are safe, and they ensure that organizational data is not compromised and is always available whenever it is required (Toka, Amico & Michiardi, 2010). Online backup solutions help reduce costs by eliminating the need to buy costly backup hardware and software. Also, with an online backup, there is no need to purchase and maintain costly external tape drives or hard drives.
What is an offline backup?
An offline backup is a way of storing data away from the network so that it can be accessed even in the absence of a network connection. It is used as a safety precaution: the backup remains intact as of the time it was copied to the offline media, even while the live data continues to be updated (Bhattacharya, 2002).
What steps are required for an offline backup?
1. Understand the backup environment:
Before creating an offline backup, a thorough assessment should be done, taking inventory of the current environment: the backup servers, storage and networking components, automated libraries, and the backup media. The assessment entails determining whether the available infrastructure is suitable for the backup, the criticality of the data, and the legal requirements concerning data backups, among other essential factors.
2. Perform capacity planning
After the assessment is complete and the backup infrastructure is understood, the next step is to carry out capacity planning. It is aimed at identifying the storage space requirements so as to determine the differences between the current storage infrastructure and the expected requirements.
3. Analyze the governing policies and procedures
The success of an offline backup cannot be ensured unless the policies and operational procedures are documented. In this third step, the internal as well as the external customer requirements are reviewed so as to make sure that backup and recovery will meet their needs.
4. Determine the resource constraints
In an ideal world, an organization would have unlimited resources for accomplishing its business objectives, but in practice resources are limited. The step of determining resource constraints takes into account the constraints the organization is facing, including physical infrastructure constraints, financial constraints, and personnel constraints. That determines which resources will be used and which will need to be acquired or reused.
5. Implement the Plan
Once the offline backup plan is completed and approved, it should be implemented using a phased approach. First and foremost, staff will need to be hired and those already available trained, or else an outsourcing vendor will be selected. The next step is to implement and test the new backup software tools.
Explain all reasons for using an offline backup
An offline backup is used to help ensure a fast backup and restore. “Even with a fast Internet connection, when there is a large volume of data backed up online, the restore will not be that fast. The offline backup will ensure a fast recovery of the data whenever required” (Schmied & Thomas, 2004). Another reason for using offline backup is the accessibility of the backup: local offline backups are within reach at the office, and one just needs to plug in the backup media to start backing up or restoring data. Companies are also attracted to offline backups because they want a backup that is safer from cybersecurity breaches and other attacks that leverage Internet connections (Bhattacharya, 2002). The final reason for doing an offline backup is mobility: organizations want backups they can move around, archive in other locations, or carry for safety purposes.

References
Bhattacharya, S., Mohan, C., Brannon, K. W., Narang, I., Hsiao, H. I., & Subramanian, M. (2002, June). Coordinating backup/recovery and data consistency between database and file systems. In Proceedings of the 2002 ACM SIGMOD international conference on Management of data (pp. 500-511). ACM.
Schmied, W., & Thomas, O. (2004). Implementing and managing exchange server 2003. Indianapolis, Indiana: Que Certification.
Toka, L., Amico, M. D., & Michiardi, P. (2010, August). Online data backup: A peer-assisted approach. In Peer-to-Peer Computing (P2P), 2010 IEEE Tenth International Conference on (pp. 1-10). IEEE.
Tow, D. (2009). SQL Tuning. Sebastopol: O'Reilly Media, Inc.
Yagoub, K., Belknap, P., Dageville, B., Dias, K., Joshi, S., & Yu, H. (2008). Oracle's SQL Performance Analyzer. IEEE Data Eng. Bull., 31(1), 51-58.

Monday, 23 January 2017 18:13

Miscellaneous Errors

Grand Street Medical Associates
Student Name
Course Title
Instructor
Date Submitted
Miscellaneous Errors
A large amount of patients' protected health information was exposed through an unsecured FTP server. This was discovered by Justin Shafer, who proceeded to notify Grand Street Medical Associates (GSMA). GSMA went on to contact DataBreaches.net in March of the same year. It was estimated that more than 14,600 files were exposed, amounting to more than 20 GB of data. Each of the files skimmed by DataBreaches.net contained PHI on several patients. It appeared that the files were the product of an effort to scan and digitize patients' paper records dating back to December 2011.
Figure 1: Some of the files exposed
Most of the exposed files contained patients' demographics, and the files also contained PHI for unique patients. Additionally, for most of the patients whose files were exposed, there were questions about whether they had recently visited the lab or had any bloodwork done. The forms also required patients to provide information such as name, marital status, date of birth, age, gender, address, and occupation, among other sensitive details. About half of the 65 patients whose forms were exposed had provided the requested information without giving it much thought, while other patients had provided insurance information as requested. For most of the patients, there were also copies of insurance cards as well as driver's licenses. The matter became even worse because more than 14,000 files appeared in Google's index, while more than 6,300 other files appeared in the Filemare index. Below is the Google index:
Figure 2: Google Index
References
http://siliconangle.com/blog/2016/03/22/medical-data-breach-exposes-patient-records/ 
http://www.databreaches.net/ny-treasure-trove-of-grand-street-medical-associates-patient-data-exposed-and-indexed/ 
Friday, 09 December 2016 18:29


LOGGING DATA INTO CLOUD USING A FREEDOM BOARD
Student Name
Course Title
Instructor
Date Submitted

Introduction
mbed is a rapid prototyping environment and platform for microcontrollers, originally consisting of a cloud-based IDE and the NXP LPC1768 development board. Over the last several years, the mbed platform has seen extensive growth and development. However, the hardware side of things has not grown in the same way, which was not good news, since competing development boards usually cost less; this could be one of the reasons why mbed did not gain popularity like other rapid development platforms. Now there is another powerful board to be used alongside mbed, the Freescale FRDM-KL25Z, which is a move in the right direction for both Freescale and mbed. The platform now gives users access to a dirt-cheap development board and a user-friendly IDE.
What is mbed?
mbed is an online development platform and environment, similar in spirit to cloud computing services like Google Docs and Zoho Office. The mbed environment has some advantages and disadvantages. The main advantage is that there is no need to install software on the PC: as long as the user has a web browser and a USB port, they can start using the mbed environment. In addition, new libraries and IDE updates are handled by the server, so the user does not have to worry about keeping the mbed environment up to date, and the online environment keeps track of the MCU interface firmware, which can be updated when required. However, the environment has the disadvantage that the user cannot work with their code off-line, and it raises privacy issues (Boxall, 2013).

Figure 1: mbed environment
As can be seen from the diagram above, the IDE is straightforward. All the user's projects can be reached from the left column, the editor occupies the main window, and compiler and other messages appear in the bottom window. There is an online support forum, an official library and library database, and help files among many other components, so there is plenty of support. Code is written in C/C++ without any major challenges. When the code is compiled, the online compiler creates a binary file which can be downloaded easily and then copied to the hardware over USB (Marao, 2015).

Freedom Board
A Freedom board is a cheap development board based on the Freescale ARM Cortex-M0+ MKL25Z128VLK4. It has the following features (Styger, 2014):
i. Easy access to the MCU I/O
ii. MKL25Z128VLK4 MCU: 48 MHz, 128 KB flash, 16 KB SRAM, USB OTG (FS), 80LQFP
iii. Capacitive touch slider, MMA8451Q accelerometer, and tri-color LED
iv. OpenSDA debug interface
v. Default mass-storage-device flash programming interface; no tool installation is required to evaluate demo applications
vi. The P&E Multilink interface provides run-control debugging as well as compatibility with IDE tools
vii. An open-source data logging application provides an example of customer, partner, and enthusiast development on the OpenSDA circuit

Figure 2: Freedom Board
Most of the literature on the board mentions that it is "Arduino compatible," which is due to the layout of the GPIO pins. Therefore, if a user has a 3.3 V-compatible Arduino shield, they may be able to use it. However, the I/O pins can only sink or source about 3 mA, so the GPIO should be handled with care. As can be seen from the features, the Freedom board also has an accelerometer as well as an RGB LED, which can be put to various uses (Sunny IUT, 2015).
Getting Started
This explains the process through which a Freedom board is put into working with mbed as well as creating first program (Hello world). The requirements are a computer installed with any operating system (OS) with USB, connection to the Internet, and a web browser. Additionally, there is a need for a USB cable (mini-A to A) and lastly a Freedom board. Here is the procedure:
i. Ensure the Freedom board is there
ii. Download and install the required USB drivers for your operating system (drivers are provided for Windows and Linux)
iii. Create a user account at mbed.org by following the instructions given
iv. Plug in the Freedom board using the USB socket labeled OpenSDA. Once plugged in, the Freedom board will appear as a disk called "bootloader."
Among these steps, plugging in the Freedom board, getting the software, building and running, and creating applications are the most important. Choosing the software means selecting a development path: the user chooses between the Kinetis Software Development Kit (SDK) plus an Integrated Development Environment (IDE) and the ARM mbed Online Development Site (Styger, 2014).
Features of SDK+IDE
i. It has the ultimate flexibility of the software
ii. It has examples of application and project files
iii. It has a true support of debugging through the SWD and JTAG
iv. It has all the peripheral drivers with their source
Features of ARM mbed Online Development Site
i. It has an online compiler but lacks SWD/JTAG debugging
ii. It has a heavily abstracted and simplified programming interface
iii. Its peripheral drivers are useful but more limited, with source provided
iv. It has examples submitted by the community
Build and Run SDK demonstrations on the FRDM-KL25Z
i. Exploring the SDK Example Code
The Kinetis SDK ships with a long list of demo applications and driver examples.
ii. Build, Run, and Debug the SDK examples
This provides step-by-step instructions on how the user can configure, build, run, and debug the demos for the toolchains supported by the SDK.
Creating Applications for the FRDM-KL25Z
i. Getting the SDK Project Generator
This explains the creation of a project and the making of a simple SDK-based application. NXP provides an intuitive, simple project generation utility that allows easy creation of custom projects based on the Kinetis SDK.
ii. Running the SDK Project Generator
After the extraction of the ZIP file, the utility is opened by a simple click on the KSDK_Project_Generator executable for the computer’s operating system. Then the board used as a reference is selected.

Figure 3: KSDK Project Generator
iii. Open the Project
The new project can be found in <SDK_Install_Directory>/examples/frdmkl25z/user_apps and is opened in the chosen toolchain.
iv. Writing Code
Writing code means making the new project do something functional other than spinning in an infinite loop. The SDK examples include a board support package (BSP) to help in doing different things with the Freedom board, including macros and clear definitions for things like LEDs, peripheral instances, and switches, among others. Below is an LED blink made using the BSP macros.
The main() function in the code's main.c should be updated using the piece of code below:
volatile int delay;

// Configure board-specific pin muxing
hardware_init();

// Initialize the UART terminal
dbg_uart_init();

PRINTF("\r\nRunning the myProject project.\n");

// Enable the GPIO port for LED1
LED1_EN;

for (;;)
{
    LED1_ON;
    delay = 5000000;
    while (delay--);

    LED1_OFF;
    delay = 5000000;
    while (delay--);
}
The above code is then built and uploaded to the Freedom board.
Creating and Uploading Code
A simple program is created to ensure all is well. When the IDE is entered, the user is presented with the "Guide to mbed Online Compiler." The user then clicks "New," gives the program a name, and clicks OK. The user is then presented with a basic "hello world" program that blinks the blue LED within the RGB module. The delays can be adjusted to the user's liking before clicking "Compile" in the toolbar. Assuming everything has gone well, the web browser will deliver a .bin file to the default download directory. The .bin file is then copied to the mbed drive and the reset button is pressed on the Freedom board. The blue LED now starts blinking (Meikle, 2015).
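For reference, the kind of "hello world" described above looks roughly like the sketch below. It assumes the classic mbed 2 API (DigitalOut and wait()) and the LED_BLUE pin alias defined for this board; note that the on-board RGB LED is wired active-low, so writing 0 turns the LED on.

#include "mbed.h"

// Blink the blue channel of the on-board RGB LED.
// The RGB LED on the FRDM-KL25Z is active-low: 0 = on, 1 = off.
DigitalOut blue(LED_BLUE);

int main() {
    while (true) {
        blue = 0;      // LED on
        wait(0.5);     // delay in seconds; adjust to taste
        blue = 1;      // LED off
        wait(0.5);
    }
}

Changing the two wait() values is exactly the "adjust the delays" step mentioned above; recompiling produces a new .bin file to copy onto the mbed drive.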
Moving Forward
There are some examples of code demonstrating how the accelerometer, RGB LED, and touch slider are used. The map below shows the pins on the Freedom board as they appear to the mbed IDE.

Figure 4: Freedom Board Pins
All the blue pins, such as PTxx, can easily be referenced in code. For instance, to pulse PTA13 on and off every second, the code below is used (Young, 2015):
#include "mbed.h"

DigitalOut pulsepin(PTA13);

int main() {
    while (1) {
        pulsepin = 1;
        wait(1);
        pulsepin = 0;
        wait(1);
    }
}
The pin being referenced is passed to the DigitalOut declaration; "pulsepin" therefore refers to PTA13.
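Building on the same idea, the accelerometer and RGB LED mentioned earlier can be combined. The sketch below is only illustrative: it assumes the community MMA8451Q mbed library (a constructor taking the SDA pin, SCL pin, and I2C address, with getAccX()/getAccY()/getAccZ() returning readings in g) and the LED_RED/LED_GREEN/LED_BLUE aliases, which are PWM-capable and active-low on this board.

#include "mbed.h"
#include "MMA8451Q.h"   // community accelerometer library -- an assumption, not part of the core mbed SDK
#include <math.h>

#define MMA8451_I2C_ADDRESS (0x1D << 1)

int main() {
    // The accelerometer sits on the KL25Z's I2C pins PTE25 (SDA) and PTE24 (SCL).
    MMA8451Q acc(PTE25, PTE24, MMA8451_I2C_ADDRESS);

    // PWM outputs on the RGB LED; a duty cycle of 1.0 is "off" because the LED is active-low.
    PwmOut rled(LED_RED);
    PwmOut gled(LED_GREEN);
    PwmOut bled(LED_BLUE);

    while (true) {
        // Map each axis reading (roughly -1 g to +1 g) onto LED brightness.
        rled = 1.0f - fabsf(acc.getAccX());
        gled = 1.0f - fabsf(acc.getAccY());
        bled = 1.0f - fabsf(acc.getAccZ());
        wait(0.05f);
    }
}

Tilting the board then shifts the LED color, which is a handy first check that both the I2C accelerometer and the PWM pins are mapped correctly.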
CONCLUSION
The Freedom board offers users a very cheap way of getting into microcontroller programming and, from there, into the cloud. Users need not worry about the IDE or firmware revisions, about installing software on locked-down computers, or about losing their files. The paper has shown that it is indeed possible to use Freedom boards to log data into the cloud, where it can then be accessed.

Works Cited
Boxall, J. (2013). mbed and the Freescale FRDM-KL25Z development board. Retrieved from Tronixstuff: http://tronixstuff.com/2013/03/07/mbed-and-the-freescale-frdm-kl25z-development-board/
IUT, S. (2015). Freescale freedom FRDM-K64F development platform. Retrieved from Element 14 Community: https://www.element14.com/community/roadTestReviews/1972/l/freescale-freedom-frdm-k64f-development-platform-review
Marao, B. (2015). Freedom beginners guide. Retrieved from Element 14 Community: https://www.element14.com/community/docs/DOC-68209/l/read-me-first-freedom-beginners-guide
Meikle, C. (2015). Freescale Freedom FRDM-K64F development platform. Retrieved from Element 14 Community: https://www.element14.com/community/roadTestReviews/1984/l/freescale-freedom-frdm-k64f-development-platform-review
Styger, E. (2014). Freedom board with Adafruit ultimate GPS data logger shield. Retrieved from DZone: https://dzone.com/articles/tutorial-freedom-board
Styger, E. (2014). IoT datalogger with ESP8266 Wi-Fi module and FRDM-KL25Z. Retrieved from MCU on Eclipse: https://mcuoneclipse.com/2014/12/14/tutorial-iot-datalogger-with-esp8266-wifi-module-and-frdm-kl25z/
Young, D. (2015). Create your own cloud server on the Raspberry Pi 2. Retrieved from Element 14 Community: https://www.element14.com/community/community/raspberry-pi/raspberrypi_projects/blog/2015/05/05/owncloud-v8-server-on-raspberry-pi-2-create-your-own-dropbox
https://developer.mbed.org/platforms/FRDM-K64F/#flash-a-project-binary
https://developer.mbed.org/platforms/IBMEthernetKit/

Tuesday, 29 November 2016 06:08

The Internet Connected Smart Washer/Dryer


Student’s Name: NIKHIL KUMAR THUMMALA


Course Name

Course Title

Course Instructor

Date of Submission

Systems Analysis


Existing washer-dryers have standard washing programs and software. They have limited configurations and settings, allowing only specific types of clothes such as cotton, synthetic materials, and delicate fabrics, and they offer a limited range of temperatures. Current systems are modeled with some additional options such as pre-wash, extra rinse, and spin, and they have special cycles designed for specific scenarios such as hand-washing, curtains, sportswear, and even trainers. Current washer-dryers have a filter designed to catch the fluff that gets whipped up during drying, but it has limited capability for filtering other bits of debris. They are highly manual and add tasks such as the need for regular cleaning. They are noisy and may cause back pain, since some models sit close to the ground, making them fiddly to open and operate. The systems have problems associated with dry cleaning due to conflicting settings and configurations. Current washer-dryers may need an additional hook-up system, tool, or process behind the dryer when installed, and a manual step of adding water is usually required via a dispenser on the machine.


The condenser system in a washer-dryer cools most of the moist, hot air inside the machine, but the disposal of the resulting water is poor: it is simply poured down the drain. The system must use water and cannot operate without a sustainable amount of it, and the cleaning process uses a lot of water, failing to conserve a scarce resource. This lack of environmentally friendly characteristics makes washer-dryers a poor choice for households with a water meter. Current energy ratings for washer-dryers range from A+ to G for energy efficiency. However, even A+, the most efficient rating, is hard to reconcile with the machine temperatures involved because of overheating (Deng, et al., 2015). Officially the ratings may go down to G, but in practice machines rated below C cannot be found in the shops. The systems incorporate electrical equipment that draws twice the energy of an ordinary household electrical circuit. Existing washer-dryer systems run on a 240-volt supply that leads to heating of the coils and requires a special 240-volt outlet in the laundry area.


Proposed Washer-Dryers

The washer-dryers should be fitted with a sensor-dry feature: a moisture sensor programmed with enough intelligence to detect automatically how wet the laundry is. It can adjust the drying time automatically, according to user preferences, once the level of dampness or complete dryness has been determined. The new capability is intended to save time and money as well as energy costs, and the improvement will prevent over-drying, thus extending the life of clothes (Asare-Bediako, et al., 2012).


The proposed system is required to perform an effective and efficient eco cycle that significantly and continuously decreases energy use by accurately monitoring the clothes' dryness with automated programming that triggers the eco cycle. The washer-dryers shall have a monitor on their console, which will display the energy use and efficiency of the various drying cycles and give users an easy interface for working with the system. A dryer fitted with an eco cycle will be capable of using less energy, and high performance shall be guaranteed by pairing it with a matching washer. The system shall also have a responsive operating mode for recycling various by-products by converting them into energy (Asare-Bediako, et al., 2012).


The proposed washer-dryer has express-dry software capable of controlling the dry cycle using large integrated blowers. It shall provide increased airflow so that laundry dries more effectively and efficiently, with the express-dry software regulating the air streams reaching the clothes. The mechanism is fully configured to provide advanced capabilities for removing stains as well as grease (Asare-Bediako, et al., 2012).


The project is focused on selecting an appropriate processor that can accommodate steam and control steam cycles through system registers. The system shall be capable of refreshing an outfit, relaxing wrinkles, and removing odors. The settings and configurations shall involve only a small amount of water, reducing the amount used and thus conserving water. The water will be supplied as sprays fed into the dryer drum after a set number of minutes of interaction with heat. The steam controllers shall be set to trigger periodically during the tumbling session; this arrangement will effectively rearrange and fluff the load as well as keep wrinkles from forming. The washer-dryer shall adjust its settings and configurations automatically based on the number of garments fed into the dryer (Asare-Bediako, et al., 2012).


The system will include a delicates mode that coordinates cycles at ultra-low temperatures. Using analytical sensors, it will have the intelligence to dry lightweight garments safely and gently. The sensor mechanisms shall be capable of identifying loosely woven fabrics and applying the best strategies for cleaning and drying them. The new system guarantees that clothes will last longer and keep their color longer because the correct temperatures are used (Asare-Bediako, et al., 2012).


The system is intended to apply effective and efficient sanitation that eradicates pathogens. The future washer and dryer shall have methods to kill bacteria and germs that find their way into fabrics. The dryer will be equipped with a sanitizing cycle that helps provide relief to children, youths, and adults with frequent allergies. The system promotes a high level of health by ensuring that all forms of disease-causing organisms are denatured by high heat or steam, with the sanitization process coordinated by the processors. The sanitation cycle installed in the system shall be capable of eliminating up to 99.9% of common household pathogens and bacteria (Asare-Bediako, et al., 2012).


The Implementation of the Proposed System

The proposed system shall be developed using the ARM Cortex-M33 processor, which is the most configurable of the Cortex-M class for a washer and dryer. The design is built around full-featured microcontrollers that support various classes of processors. The smart washer and dryer shall be based on the latest version of the ARMv8-M architecture with ARM TrustZone. The processor is chosen for its high security and improved digital signal processing: by integrating the Cortex-M33, the smart washer will gain all the security benefits guaranteed by TrustZone security isolation (Goodwin, et al., 2013). The Cortex-M33 also guarantees deterministic, real-time, microcontroller-class operation.
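To make the TrustZone point above concrete, the sketch below shows in broad strokes how a secure-world function could be exposed to non-secure firmware using the Cortex-M Security Extensions (CMSE). The function name, the "cycle key" secret, and the partitioning are hypothetical illustrations; only the cmse_nonsecure_entry mechanism itself comes from the ARMv8-M toolchain support (secure-world code compiled with -mcmse).

#include <arm_cmse.h>   /* CMSE intrinsics; secure-side code is built with -mcmse */
#include <stdint.h>

/* Hypothetical secret kept in secure memory, e.g. a calibration or signing key. */
static uint32_t secure_cycle_key = 0xC0FFEE42u;

/* Non-secure-callable entry point: the only way the non-secure firmware
   (for example the Wi-Fi/UI code) can obtain a value derived from the secret. */
uint32_t __attribute__((cmse_nonsecure_entry)) get_cycle_token(uint32_t nonce)
{
    /* Never hand the raw key across the boundary; return a derived token only. */
    return secure_cycle_key ^ nonce;
}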


The proposed smart washer and dryer is intended to use the ARM Cortex-M33 processor, which supports a large number of flexible configuration options. The new features of the Cortex-M33 make it possible to deploy a wider range of diverse applications. The system shall include a well-connected, Bluetooth-enabled IoT node and shall be able to use a dedicated co-processor interface. The processor supports Wi-Fi-enabled operation, and the added Wi-Fi features allow users to build IoT products, with integrations such as Nest, to control the washer and dryer; Wi-Fi-enabled applications also allow running the machine when energy prices are lower (Jing, et al., 2014). The system shall extend the capability of the processor by offloading frequently used, compute-intensive operations. The ARM Cortex-M33 processor is intended to deliver an optimum balance of performance, energy efficiency, security, and productivity.


Significance of Proposed System

The ARM Cortex-M33 processor provides a security foundation, offering isolation to protect valuable configurations and settings using TrustZone technology. The new smart washer and dryer shall make use of the processor's tightly coupled co-processor interface. The ARM Cortex-M33 is an effective and efficient architecture that simplifies the smart washer and dryer design. It supports software development that makes use of digital signals: the control system has integrated digital signal processing (DSP), allowing users to implement and feed in instructions, and the processor is well suited to controlling temperatures. The smart washer and dryer use single-precision floating point to compute accurate mathematical operations, including a constant multiplier of 10x (Farokhi, Cantoni, & 2015 5th Australian Control Conference, 2015), with equivalent integer operations and software libraries available where the optional floating point unit is absent. The ARM Cortex-M33 processor shall help the system conserve energy through integrated features such as controlled sleep modes, extensive clock gating, and optional state retention.
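As one illustration of the "controlled sleep modes" mentioned above, the following is a minimal CMSIS-Core sketch. The device header name and the helper function are placeholders; what "deep sleep" actually gates or retains is defined by the specific microcontroller, not by the Cortex-M33 core itself.

#include "ARMCM33.h"   /* placeholder CMSIS device header -- substitute the actual part's header */

/* Put the core into a low-power wait until the next interrupt. */
static void wait_for_wakeup(int deep)
{
    if (deep) {
        SCB->SCR |= SCB_SCR_SLEEPDEEP_Msk;     /* request the deeper, chip-defined sleep state */
    } else {
        SCB->SCR &= ~SCB_SCR_SLEEPDEEP_Msk;    /* plain sleep: core clock gated only */
    }
    __DSB();                                   /* make sure outstanding memory accesses complete */
    __WFI();                                   /* sleep until an interrupt arrives */
}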


Figure 1: System Integration Architecture for ARM Cortex-M33 processor

System Properties and Requirements

Architecture: ARMv8-M Mainline (Harvard)

ISA Support: Thumb/Thumb-2 instruction set as defined by ARMv8-M Mainline

Pipeline: Three-stage

TrustZone: Optional TrustZone for ARMv8-M

Co-processor interface: The smart washer and dryer require the optional dedicated co-processor bus interface for up to 8 co-processor units for custom compute

DSP Extensions: Optional Digital Signal Processing (DSP) and single instruction, multiple data (SIMD) instructions; the system requires the extensions providing single-cycle 16/32-bit MAC, single-cycle dual 16-bit MAC, and 8/16-bit SIMD arithmetic

Floating Point Unit: Single-precision floating point unit, IEEE 754 compliant

Memory Protection: Memory Protection Unit (MPU) with up to 16 regions per security state

Interrupts: Non-maskable Interrupt (NMI) and up to 480 physical interrupts with 8 to 256 priority levels

Wake-up Interrupt Controller: Wakes the processor from state-retention power gating or when all clocks are stopped

Sleep Modes: Wait-for-event (WFE) and wait-for-interrupt (WFI) instructions with Sleep-On-Exit functionality

Debug: Joint Test Action Group (JTAG) and Serial Wire Debug ports; up to 8 breakpoints and 4 watchpoints

Trace: Instruction Trace (ETM), Micro Trace Buffer (MTB), Data Trace (DWT), and Instrumentation Trace (ITM)

Table 1: System Properties and Requirements

Smart Washer and Dryer System Block Diagram

The smart washer and dryer are based on the ARM Cortex-M33 processor, which is enhanced with functionality that enables the washer and dryer peripherals to perform their cleaning functions. The systems are configured with industrial interfaces such as EtherCAT and PROFIBUS, support high-level operating systems (HLOS), and can be integrated with devices running Linux and Android. The processor contains various subsystems, briefly described here. The microprocessor unit (MPU) subsystem is based on the ARM Cortex-M33 processor and the PowerVR SGX (Farokhi, Cantoni, & 2015 5th Australian Control Conference, 2015). The systems include a PRU-ICSS with a direct connection to the processor, which allows independent operation and separate clocking for greater efficiency and flexibility. Configuring the PRU-ICSS makes it easy to add peripheral interfaces and lets the system operate under real-time protocols such as EtherCAT, PROFIBUS, Ethernet Powerlink, PROFINET, EtherNet/IP, Sercos, and others. The processor allows users to program the PRU-ICSS so that it works effectively, with access to pins, during washing events. The microprocessor has system-on-chip (SoC) resources that provide flexibility for implementing fast, real-time responses, and it is specialized for handling clothes-washing operations (Xia, et al., 2015). Other functionality allows peripheral interfaces to be customized, and tasks can easily be offloaded from other microprocessors to the SoC cores.


Figure 2: A system block diagram showing component interconnect

Marketing Data and Information

The smart washer and dryer use an up to 1-GHz Sitara ARM Cortex-M33 32-bit RISC processor with the following features:

1. NEON Single Instruction Multiple data (SIMD) Core processor

2. 32KB of L1 Instruction as well as 32KB of Data Cache with Single-Error Detection (Parity)

3. 256KB of L2 Cache With Error Correcting Code (ECC)

4. 176KB of On-Chip Boot Read Only Memory (ROM)

5. 64KB of Dedicated Random Access Memory (RAM)

6. Emulation and Debug using JTAG

7. Interrupt Controller that can handle up to 128 Interrupt requests

8. On-Chip Memory which is shared on L3 Random Access Memory (RAM)

9. 64KB of General-Purpose Registers On-Chip Memory Controller (OCMC) Random Access Memory (RAM)

10. Accessible to all Masters

11. Supports Retention for Fast Wakeup

Features of External Memory Interfaces (EMIF) as applied in Smart washer and dryer

a. Memory uses mDDR(LPDDR), DDR2, DDR3, DDR3L controller with the following specifications

b. mDDR operating at 200-MHz Clock to 400-MHz Data Rate

c. DDR2 operating at 266-MHz Clock to 532-MHz Data Rate

d. DDR3 operating at 400-MHz Clock to 800-MHz Data Rate

e. DDR3L operating at 400-MHz Clock to 800-MHz Data Rate

f. Connection bus is 16-Bit Data Bus with 1GB of total addressable space that can support One x16 or Two x8 Memory Devices

General-Purpose Memory Controller (GPMC)

The smart washer and dryer have a flexible 8-bit to 16-bit asynchronous memory interface with up to seven chip selects that can be programmed to use NAND, NOR, Muxed-NOR, or SRAM. The processor makes use of a BCH code that supports 4-, 8-, or 16-bit ECC (Farokhi, Cantoni, & 2015 5th Australian Control Conference, 2015), and it is also configured with a Hamming code that supports 1-bit ECC.

Features of Error Locator Module (ELM) applied in the Smart washer and dryer system

It is used in conjunction with the GPMC and is capable of locating the addresses of data errors from syndrome polynomials generated by a BCH algorithm. It is designed to support 4-, 8-, or 16-bit ECC per 512-byte block.

The block error location technology is based on BCH algorithms. The processor also includes a Programmable Real-Time Unit Subsystem and Industrial Communication Subsystem (PRU-ICSS), which supports protocols such as PROFINET, EtherCAT, PROFIBUS, and EtherNet/IP, among others (Goodwin, et al., 2013). The processor is fitted with Programmable Real-Time Units (PRUs) with the following features:

1. A 32-bit load/store RISC processor core capable of running at 200 MHz

2. 8KB of instruction RAM with single-error detection (parity)

3. 8KB of data RAM with single-error detection (parity)

The PRU has a single-cycle 32-bit multiplier with a 64-bit accumulator. It has enhanced GPIO modules that provide shift-in and shift-out support with a parallel latch on external signals. The system uses 12KB of shared RAM with single-error detection (parity) and is fitted with three 120-byte register banks accessible by all PRUs. The configuration includes an interrupt controller (INTC) dedicated to handling system input events (Wang, et al., 2015), and a local interconnect bus dedicated to connecting internal and external masters to the resources embedded inside the PRU-ICSS.

The Peripherals integrated inside the PRU-ICSS

a. One UART port with hardware flow control pins, supporting up to 12 Mbps

b. One Enhanced Capture (eCAP) module for input capture

c. Two MII Ethernet ports that support industrial Ethernet, including EtherCAT

d. One MDIO port

Figure 3: block schematic diagram of smart washer and drier controller

Retrieved from: http://asic-soc.blogspot.co.ke/2007/12/embedded-system-for-automatic-washing.html

Power Control

Energy and power are controlled using the Power, Reset, and Clock Management (PRCM) module. It controls the smart washer and dryer's power consumption by regulating entry into and exit from standby and deep-sleep modes. The PRCM is responsible for sleep sequencing and power-domain switching, such as wake-up sequencing, power-off sequencing, and power-domain switch-on sequencing. The amount of power used in the system is determined by two non-switchable power domains, Real Time Clock (RTC) and Wake-Up Logic (WAKEUP), and three switchable power domains: the MPU subsystem (MPU), SGX530 (GFX), and the peripheral infrastructure (PER) (Goodwin, et al., 2013). The power control devices implement SmartReflex Class 2B for core voltage scaling; they sense die temperature, process variation, and performance indicators to drive adaptive voltage scaling (AVS). The processor has also been programmed to perform dynamic voltage and frequency scaling (DVFS) using the Real-Time Clock (RTC), which can specify the date (day, month, year, or day of the week) and the time (hours, minutes, and seconds) used to control energy consumption.

System Clocks

The ARM Cortex-M33 processor used to run the smart washer and dryer runs from a 15- to 35-MHz high-frequency oscillator and generates reference clocks for the various internal system and peripheral clocks. Individual clocks can be enabled and disabled automatically for each subsystem, regulating the power supplied through the peripherals to reduce overall power consumption.

The processor has an internal 32.768-kHz oscillator, RTC logic, and a 1.1-V internal LDO, with independent power controlled through the RTC_PWRONRSTn input. The system has a dedicated input pin called EXT_WAKEUP for managing external wake events, and it includes programmable alarms that generate internal interrupts to the PRCM on wake-up, or directly to the ARM Cortex-M33 processor for abnormal-event notification (Farokhi, Cantoni, & 2015 5th Australian Control Conference, 2015). Users can program the alarm to drive an external output, PMIC_POWER_EN, which enables the power management IC to restore the non-RTC power domains.

Peripherals for Power Control and Management

1. Up to three USB 2.0 high-speed OTG ports with integrated PHY

2. Up to three industrial Gigabit Ethernet MACs at 10, 100, and 1000 Mbps

3. Integrated switch

4. The MACs support MII, RGMII, RMII, and MDIO interfaces

5. Ethernet MACs and switch configured to operate independently of other functions

6. Integrated IEEE 1588v2 precision time protocol (PTP)

7. Enhanced Controller Area Network (CAN) ports supporting CAN 2 Parts A and B, able to transmit and receive at clock rates beyond 50 MHz

Programming languages used

The graphics of the ARM Cortex-M33-based smart washer and dryer controller have been developed using a tile-based architecture that delivers up to 20 million polygons per second. The architecture implements the Universal Scalable Shader Engine (USSE), a multithreaded engine incorporating both pixel and vertex shader functionality. The programming languages applied include Java and JavaScript, used to design the shader features. The development platform includes the Microsoft VS3.0 and PS3.0 shader models and OGL2.0. CSS has been applied in developing the industry-standard APIs, which support Direct3D Mobile, OGL-ES 1.1 and 2.0, OpenVG 1.0, and OpenMAX (Farokhi & Cantoni, 2015). Visual Basic has been used to program the fine-grained task-switching module, load balancing, and the power-management platform, and it has also facilitated the development of advanced-geometry DMA-driven operations with minimal CPU interaction. CSS has further been used to produce high-quality, anti-aliased images of fully virtualized graphics, with memory addressing arranged for effective OS operation in a unified memory architecture.

Future improvements of ARM Cortex-M33 processor on smart washer and dryer

The improved system is expected to support 32-, 64-, and 128-bit eCAP modules. The modifications will be configured to capture inputs using three auxiliary PWM outputs. The system shall support more than six UARTs (Ministerrådet, 2007). All UARTs shall support IrDA and CIR modes and RTS/CTS hardware flow control, and the UART1 architecture also supports full modem control for both master and slave serial interfaces. The improved system is intended to support the following:

1. Up to four chip selects

2. Clock rates of up to 98 MHz

3. Up to three MMC, SD, and SDIO ports

4. 1-, 4-, and 8-bit SD, MMC, and SDIO modes

5. A dedicated power rail for MMCSD0 supporting 1.8 V and 3.3 V

6. A 48-MHz data transfer rate

7. Card detection and write protection

8. Compliance with the MMC 4.3, SD, and SDIO 2.0 features and specifications

The future system shall enable easy device identification through an integrated electrical fuse farm (FuseFarm) in which some bits are factory programmable, supporting features such as:

1. A unique production ID

2. The device part number (unique JTAG ID)

3. A device revision code readable by the host ARM

The future ARM Cortex-M33 processor used to run the smart washer and dryer shall have a modified debug interface supporting JTAG and cJTAG for the ARM Cortex-M33 processor, the PRCM, and the PRU-ICSS. Added debug support shall include device boundary scanning with IEEE 1500 features.

The future ARM Cortex-M33 processor used to run the smart washer and dryer will have an on-chip Enhanced DMA Controller (EDMA) with six third-party transfer controllers (TPTCs) and three third-party channel controllers (TPCCs), supporting up to 128 programmable logical channels and 16 QDMA channels (Ministerrådet, 2007). The EDMA will ease transfers to and from on-chip memories and to and from external storage such as EMIF, GPMC, and slave peripherals.

The future ARM Cortex-M33 processor used to run the smart washer and dryer shall have an Inter-Processor Communication (IPC) interface that integrates a hardware-based mailbox for IPC and a spinlock module for process synchronization between the PRCM and the PRU-ICSS (Ministerrådet, 2007). The spinlock will have 128 software-assigned lock registers, and the interface shall support:

1. Mailbox registers for generating interrupts

2. Three initiators: the PRCM, PRU0, and PRU1

The future ARM Cortex-M33 processor used to run the smart washer and dryer shall also have security features such as crypto hardware accelerators (AES, SHA, RNG) and secure boot with configurable boot modes. The boot mode shall be selected through boot configuration pins latched on the rising edge of the PWRONRSTn reset input pin. The devices will be offered in the following packages:

1. A 298-pin S-PBGA-N298 package (ZCE suffix), 0.65-mm ball pitch

2. A 324-pin S-PBGA-N324 package (ZCZ suffix), 0.80-mm ball pitch

Figure 4: Future changes and modification of smart washer and drier

Retrieved from: http://www.hitachi-sales.ae/eng/featureshighlight/washingmachines.html

Cost of Implementing Smart Washer and Drier

Item description                                    Estimated expenses
Field research                                      $500.00
Consultation fees                                   $300.00
Operations planning                                 $200.00
Conducting internet searches                        $200.00
Facilitating interviews                             $200.00
Monitoring and managing the development process     $1,000.00
Requirements                                        $3,000.00
Compact disks                                       $200.00
Installations                                       $1,000.00
Documentation and deliverables                      $1,000.00
Miscellaneous                                       $200.00
Total                                               $36,000.00

Estimated Time for Constructing the Smart Washer and Drier

The construction schedule runs from January through November and covers four phases:

1. Planning and business analysis of the smart washer and drier

2. Feasibility study, scope definition, and budget definition for the smart washer and drier

3. Design and implementation of the smart washer and drier

4. Testing, installation, and documentation of the smart washer and drier

References

Asare-Bediako, B., Ribeiro, P. F., & Kling, W. L. (2012). Integrated energy optimization with smart home energy management systems. In Proceedings of the 2012 3rd IEEE PES Innovative Smart Grid Technologies Europe (ISGT Europe) (pp. 1-8).

Deng, R., Yang, Z., Chow, M.-Y., & Chen, J. (2015). A survey on demand response in smart grids: Mathematical models and approaches. IEEE Transactions on Industrial Informatics, 11(3), 570-582.

Farokhi, F., & Cantoni, M. (2015). Distributed negotiation for scheduling smart appliances. In Proceedings of the 2015 5th Australian Control Conference (AUCC) (pp. 327-330).

Goodwin, S., Dykes, J., Jones, S., Dillingham, I., Dove, G., Duffy, A., Kachkaev, A., ... Wood, J. (2013). Creative user-centered visualization design for energy analysts and modelers. IEEE Transactions on Visualization and Computer Graphics, 19(12), 2516-2525.

Jing, Z., Taeho, J., Yu, W., & Xiangyang, L. (2014). Achieving differential privacy of data disclosure in the smart grid. In Proceedings of IEEE INFOCOM 2014 - IEEE Conference on Computer Communications (pp. 504-512).

Ministerrådet, N. (2007). Impact of energy labelling on household appliances. Copenhagen: Nordiska ministerrådets förlag.

Wang, C., Zhou, Y., Wu, J., Wang, J., Zhang, Y., & Wang, D. (2015). Robust-index method for household load scheduling considering uncertainties of customer behavior. IEEE Transactions on Smart Grid, 6(4), 1806-1818.

Xia, L., Alpcan, T., Mareels, I., Brazil, M., de Hoog, J., & Thomas, D. A. (2015). Modelling voltage-demand relationship on power distribution grid for distributed demand management. In Proceedings of the 2015 5th Australian Control Conference (AUCC) (pp. 200-205).

https://www.arm.com/products/processors/cortex-m/cortex-m33-processor.php

https://developer.arm.com/products/processors/cortex-m/cortex-m33

http://www.which.co.uk/reviews/washer-dryers/article/washer-dryer-features-explained

https://www.lowes.com/projects/kitchen-and-dining/dryer-buying-guide/project

http://www.warnersstellian.com/dryer-buying-guide/

http://www.digitaltrends.com/home/washer-dryer-buying-guide/

Saturday, 05 November 2016 06:05

LOGGING DATA INTO CLOUD USING A FREEDOM BOARD

Written by

LOGGING DATA INTO CLOUD USING A FREEDOM BOARD

Student Name

Course Title

Instructor

Date Submitted

Introduction

            mbed is a rapid prototyping environment and platform for microcontrollers, originally built around a cloud-based IDE and the NXP LPC1768 development board. Over the last several years, the mbed platform has seen extensive growth and development. However, the hardware side of things has not seen the same growth, and the matching development boards did not come cheap, which could be one of the reasons why mbed did not gain popularity like other rapid development platforms. Now there is another powerful board that can be used alongside mbed, the Freescale FRDM-KL25Z, which is a move in the right direction for both Freescale and mbed. The platform gives users access to dirt-cheap development boards and a user-friendly IDE.

What is mbed?

            mbed is an online development platform and environment, similar in spirit to cloud-computing services like Google Docs and Zoho Office. The mbed environment has some advantages and disadvantages. The main advantage is that there is no need to install software on the PC: as long as the user has a web browser and a USB port, they can start using the mbed environment. In addition, new libraries and IDE updates are handled on the server, so the user does not have to worry about updating the environment, and the online environment can prompt for MCU firmware updates when required. The disadvantages are that the user cannot work with their code offline and that there are privacy concerns (Boxall, 2013).

Figure 1: mbed environment

              It can be seen from the diagram above that the IDE is straightforward. All the user's projects can be reached from the left column, the editor sits in the main window, and the compiler and other messages appear in the bottom window. There is also an online support forum, an official library and library database, and help files among many other components, so there is plenty of support. Code is written in C/C++ without any major challenges. When the code is compiled, the online compiler creates a binary file which can be downloaded easily and subsequently copied to the hardware through USB (Marao, 2015).

Freedom Board

            The Freedom board is an inexpensive development board based on the Freescale MKL25Z128VLK4, an ARM Cortex-M0+ microcontroller. It has the following features (Styger, 2014):

  1. Easy access to the MCU I/O
  2. MKL25Z128VLK4 MCU – 48 MHz, 128 KB flash, 16 KB SRAM, USB OTG (FS), 80-LQFP
  3. Capacitive touch "slider," MMA8451Q accelerometer, and tri-color LED
  4. Sophisticated OpenSDA debug interface
  5. Default mass-storage-device flash programming interface; no tool installation is required to evaluate demo apps
  6. P&E Multilink interface providing run-control debugging and compatibility with IDE tools
  7. Open-source data-logging applications demonstrating customer, partner, and community development on the OpenSDA circuit

Figure 2: Freedom Board

            Most of the literature on the board mentions that it is "Arduino compatible" because of the layout of its GPIO pins. Therefore, if a user has a 3.3 V-compatible Arduino shield, they may be in a position to use it. However, the I/O pins are only able to sink or source about 3 mA, so GPIO should be handled with care. As the feature list shows, the Freedom board also has an accelerometer as well as an RGB LED, which can be put to various uses (Sunny IUT, 2015).

Getting Started

            This explains the process of getting a Freedom board working with mbed and creating a first program (Hello World). The requirements are a computer running any operating system (OS) with USB, a connection to the Internet, and a web browser. A USB cable (mini-A to A) and, of course, a Freedom board are also needed. Here is the procedure:

  1. Ensure the Freedom board is at hand
  2. Download and install the required USB drivers for the operating system in use (Windows or Linux)
  3. Create a user account at mbed.org, following the instructions given
  4. Plug in the Freedom board using the USB socket labeled OpenSDA. The board will then appear as a disk called "bootloader."

            Among the following steps, plugging in the Freedom board, getting the software, building and running, and creating applications are the most important. Choosing the software is in effect selecting the development path: the user chooses between the Kinetis Software Development Kit (SDK) plus an Integrated Development Environment (IDE), and the ARM mbed Online Development Site (Styger, 2014).

Features of SDK+IDE

  1. It offers the ultimate flexibility in software
  2. It includes example applications and project files
  3. It provides true debugging support through SWD and JTAG
  4. It ships all the peripheral drivers with their source

Features of ARM mbed Online Development Site

  1. It has an online compiler but lacks SWD and/or JTAG debugging
  2. It provides a heavily abstracted, simplified programming interface
  3. It offers a limited, though useful, set of drivers with source
  4. It offers examples submitted by the community

Build and Run SDK demonstrations on the FRDM-KL25Z

  i. Exploring the SDK Example Code

The Kinetis SDK includes a long list of demo applications and driver examples.

  ii. Build, Run, and Debug the SDK Examples

Step-by-step instructions show how the user can configure, build, and debug the demos for each toolchain supported by the SDK.

Creating Applications for the FRDM-KL25Z

  i. Getting the SDK Project Generator

This explains the creation of a project and the making of a simple SDK-based application. NXP provides an intuitive, simple project-generation utility that allows easy creation of custom projects based on the Kinetis SDK.

  ii. Running the SDK Project Generator

After the ZIP file is extracted, the utility is opened with a simple click on the KSDK_Project_Generator executable for the computer's operating system. The board used as a reference is then selected.

Figure 3: KSDK Project Generator

iii. Open the Project

The new project will be located in <SDK_Install_Directory>/examples/frdmkl25z/user_apps. The project is then opened in the chosen toolchain.

iv. Writing Code

            Writing code means making the new project do something functional other than spinning in an infinite loop. The SDK examples include a board support package (BSP) that helps in doing different things with the Freedom board, including macros and clear definitions for items such as LEDs, peripheral instances, and switches. Below is an LED blink written using the BSP macros.

The main() function in the project's main.c should be updated with the piece of code below:

volatile int delay;

/* Configure board-specific pin muxing */
hardware_init();

/* Initialize the UART terminal */
dbg_uart_init();

PRINTF("\r\nRunning the myProject project.\n");

/* Enable the GPIO port for LED1 */
LED1_EN;

for (;;)
{
    LED1_ON;
    delay = 5000000;
    while (delay--);

    LED1_OFF;
    delay = 5000000;
    while (delay--);
}

The above code is then built and uploaded to the Freedom board after entering the IDE and clicking "Compile."

Creating and Uploading Code

            A simple program is created to ensure all is well. When the IDE is entered, it presents the user with the "Guide to mbed Online Compiler." The user then clicks "New," gives the program a name, and clicks OK. The user is then presented with a basic "hello world" program that blinks the blue LED within the RGB module. The delays are adjusted to the user's liking before clicking "Compile" in the toolbar. Assuming everything has gone well, the web browser downloads a .bin file to the default download directory. The .bin file is then copied to the mbed drive and the reset button is pressed on the Freedom board. The blue LED now starts blinking (Meikle, 2015).

Moving Forward

There are further code examples demonstrating how the accelerometer, RGB LED, and touch slider are used. The map below shows the pins on the Freedom board as named in the mbed IDE.

Figure 4: Freedom Board Pins

All the blue pins, such as PTxx, can easily be referenced in code. For instance, to pulse PTA13 on and off every second, the code below is used (Young, 2015):

#include "mbed.h"

DigitalOut pulsepin(PTA13);

int main() {
    while (1) {
        pulsepin = 1;
        wait(1);
        pulsepin = 0;
        wait(1);
    }
}

The pin to be driven is inserted in the DigitalOut declaration; here, "pulsepin" refers to PTA13.
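As a further illustration of referencing the board's pins from the mbed IDE, the sketch below cycles the on-board RGB LED through red, green, and blue once per second. It assumes the mbed board definition for the FRDM-KL25Z maps LED_RED, LED_GREEN, and LED_BLUE to the on-board LED and that the LED is wired active low; both assumptions should be checked against the platform's pin map.

#include "mbed.h"

// LED_RED, LED_GREEN, and LED_BLUE are assumed to be provided by the
// FRDM-KL25Z board definition; the on-board RGB LED is assumed active low.
DigitalOut red(LED_RED);
DigitalOut green(LED_GREEN);
DigitalOut blue(LED_BLUE);

int main() {
    // Turn all colours off first (writing 1 switches an active-low LED off).
    red = 1;
    green = 1;
    blue = 1;

    while (1) {
        red = 0;   wait(1);  red = 1;     // red for one second
        green = 0; wait(1);  green = 1;   // green for one second
        blue = 0;  wait(1);  blue = 1;    // blue for one second
    }
}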

CONCLUSION

            The Freedom board offers users a very cheap way of getting into programming and microcontrollers and, finally, into the cloud. Users need not worry about the IDE or firmware revisions, about installing software on locked-down computers, or about losing their files. The paper has shown that it is indeed possible to use a Freedom board to log data into the cloud easily, where the data can then be accessed.

Works Cited

Boxall, J. (2013). mbed and the Freescale FRDM-KL25Z development board. Retrieved from Tronixstuff: http://tronixstuff.com/2013/03/07/mbed-and-the-freescale-frdm-kl25z-development-board/

IUT, S. (2015). Freescale Freedom FRDM-K64F development platform. Retrieved from Element 14 Community: https://www.element14.com/community/roadTestReviews/1972/l/freescale-freedom-frdm-k64f-development-platform-review

Marao, B. (2015). Freedom beginners guide. Retrieved from Element 14 Community: https://www.element14.com/community/docs/DOC-68209/l/read-me-first-freedom-beginners-guide

Meikle, C. (2015). Freescale Freedom FRDM-K64F development platform. Retrieved from Element 14 Community: https://www.element14.com/community/roadTestReviews/1984/l/freescale-freedom-frdm-k64f-development-platform-review

Styger, E. (2014). Freedom board with Adafruit ultimate GPS data logger shield. Retrieved from DZone: https://dzone.com/articles/tutorial-freedom-board

Styger, E. (2014). IoT datalogger with ESP8266 Wi-Fi module and FRDM-KL25Z. Retrieved from MCU on Eclipse: https://mcuoneclipse.com/2014/12/14/tutorial-iot-datalogger-with-esp8266-wifi-module-and-frdm-kl25z/

Young, D. (2015). Create your own cloud server on the Raspberry Pi 2. Retrieved from Element 14 Community: https://www.element14.com/community/community/raspberry-pi/raspberrypi_projects/blog/2015/05/05/owncloud-v8-server-on-raspberry-pi-2-create-your-own-dropbox

https://developer.mbed.org/platforms/FRDM-K64F/#flash-a-project-binary

https://developer.mbed.org/platforms/IBMEthernetKit/

Saturday, 12 March 2016 02:51

Security and Privacy in Cloud Computing

Written by

Security and Privacy in Cloud Computing

Name: Vamshi Ravula

Date: 29th February 2016

Address: Vamshi Ravula
1266 Teaneck red, apt2A
TEANECK, NJ 07666
United States


 Shiva Sai

Table of Contents

Introduction

Executive Summary

Business Need and Current Situation

Project Overview

Objectives

Scope and Out of Scope

Deliverables

Stakeholders

Resources

Strategic Alignment

Environmental Analysis

Market Readiness

Alternatives (business, technical, and procurement)

Business and Operational Impacts

Risk Assessment and Analysis

Feasibility Assessment and Analysis

Implementation Strategy

Project Review and Approval Process

Recommendations

Project Sign-Off

References

Introduction

Cloud computing is a paradigm in computing that allows third-party service providers to offer a centralized pool of configurable resources to end-users.  Those end-users are individuals and enterprises; they make on-demand access to the resources in the cloud and use them to deploy their services in light of their ever-changing requirements.  In that way, the end-users have no need to implement and manage their own computing infrastructure, enabling fast deployment with minimal functional and management overheads (Pearson & Benameur, 2010).  While cloud computing offers promising benefits to businesses and individuals, it also introduces security and privacy challenges.  Those issues include how data owners can be sure that their data are used in an authorized manner and how the confidentiality of data is protected while legitimate data access is still allowed. Other issues include how the trustworthiness of metering services can be assured so that end-users are not unfairly charged.

Executive Summary

Cloud computing brings some attributes that need special attention when it comes to trusting the system.  Trust in a cloud computing system is based on the data protection and privacy as well as the prevention techniques leveraged in it (Neisse et al., 2011).  There are numerous tools and techniques for protecting privacy and security in the cloud, but those tools have not removed the hurdle of trust that remains with cloud computing.  Security is a combination of many properties, including prevention of information disclosure, prevention of unauthorized access, integrity, and availability.  The major issues in the cloud regarding security and information privacy include resource management, resource security, and resource monitoring (Kshetri, 2013).  This paper will help businesses understand the best techniques for ensuring privacy and confidentiality in the cloud computing paradigm.

Business Need and Current Situation

Currently, the cloud computing paradigm lacks the standard rules and regulations required for deploying applications in the cloud, and it lacks standardization generally (Pearson & Benameur, 2010).  Numerous novel techniques have been implemented in the cloud, although those techniques have not been adequate in ensuring total security and privacy because of the dynamism of the cloud computing environment.  Enterprises need to operate in an environment that is free of any security and privacy issues, so it is vital to come up with mechanisms that help them develop confidence and be assured of it by the cloud computing providers (Chen & Zhao, 2012).  The paper highlights the inherent issues regarding data security, privacy, management, and governance in light of control in the cloud computing environment.  The best available ways of ensuring security and privacy in the cloud computing environment will be highlighted, and the paper proposes a data security and privacy framework for cloud computing networks.

Project Overview

Objectives

  • To help cloud computing providers gain full knowledge of the privacy and security issues that pertain to the cloud computing paradigm
  • To provide a security and privacy framework that can help the providers implement the required security and privacy in the cloud computing environment
  • To help the providers understand the constraints that they may face in implementing the new framework

 Scope and Out of Scope

The scope of this paper is to highlight the security and privacy issues faced in the cloud computing environment and then propose a new security and privacy framework that can help in addressing the current issues effectively.  The paper does not address other issues that are unrelated to the privacy and security of customer data in the cloud computing environment.  It also does not cover security management of the hardware and software on the users' side.

Deliverables

The first deliverable of this document is a list of security and privacy issues that are faced in the cloud computing environment. The other deliverables for the project are a market analysis and a feasibility study for the proposed framework, as well as the alternative frameworks for solving the problem being discussed.  There is also the deliverable of a framework that can adequately and effectively address the issue of privacy in the cloud computing environment.

Stakeholders

The stakeholders in this project include the project managers, the business analysts, the technical team, and the cloud computing providers.  The project manager offers project management advice (Ferraro, 2012); the technical team designs the framework and makes sure that it works as desired; and the business analyst analyzes the current security and privacy issues and documents them for the other stakeholders. The cloud computing providers have the responsibility of giving their opinions concerning the new implementation and the architecture of their systems.

Resources

The project requires some resources that will make it a success. Financial resources are needed, since the implementation of the proposed model will incur costs for purchasing some technologies, paying developer salaries, and covering any other necessary expenses.  Project management and other personnel are also required to accomplish the project development and management so that it will be a success in the long run.

Strategic Alignment

The project is aligned with the business requirement of operating in an environment that is free of security and privacy threats.  Businesses want to assure their customers that their details are kept securely and used appropriately.  It is also a legal requirement for companies to have proper policies and procedures for ensuring the security of their information systems; otherwise, they will be liable to prosecution. The project will help businesses achieve compliance requirements while also gaining a competitive advantage through proper administration and management of their data and assets.

Environmental Analysis

The cloud computing environment is facing many challenges regarding the security and privacy mechanisms that are relevant for each provider. Many cloud computing providers exist, and those providers are trying their best to incorporate mechanisms for ensuring privacy and security.  Cloud computing services are becoming a booming business because nowadays many companies want to avoid costly devices and the complexities of managing many distributed IT systems.  The providers offer entrepreneurs, mom-and-pop outfits, and SOHOs access to sophisticated technologies, making it unnecessary to hire IT consultants or technology workers.  The cloud computing services include infrastructure as a service, platform as a service, and software as a service; businesses and individuals can use any of these services depending on the capability that they want in their business environment.  Those services also come with their own issues regarding privacy and security of data.  Some of the providers of cloud services include Dropbox, Windows Azure, salesforce.com, Google, Amazon, and Rackspace, among others.

Market Readiness

The potential customers for the project include all the cloud computing providers and the businesses that are planning to move into the cloud, because cloud users and providers are both concerned about security in the cloud computing paradigm.  The main issue is the difficulty of selecting and implementing the right security and privacy mechanisms in this computing environment so as to resolve the issues currently being faced.  The resolution of those issues through the solution proposed in this project will thus be highly attractive to users and providers.

Alternatives (business, technical, and procurement)

The project can also succeed by addressing each security and privacy issue independently, as there are many technologies and techniques that are specific to every issue, including data integrity, privacy, data storage, availability, reliability, monitoring, identity management, and averting attacks, among others. Because each issue requires a different approach, various techniques will be used to address the problem. Businesses can also implement a single technology or platform that has most of the capabilities and then slowly configure the system to incorporate the other features.  The businesses may alternatively implement security at other levels of hardware and software in their systems if the protection at the cloud level is not sufficient.
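As one concrete example of the data-integrity techniques mentioned above, an object can be hashed before it is uploaded to the cloud and re-hashed after it is downloaded, with the two digests compared. The C sketch below computes a SHA-256 digest using OpenSSL's SHA256() helper (link with -lcrypto); it is a minimal illustration of the integrity check only, not a complete cloud-storage integration, and the sample data is made up.

#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>   /* OpenSSL; link with -lcrypto */

int main(void)
{
    /* Illustrative payload standing in for an object stored in the cloud. */
    const unsigned char data[] = "customer record to be stored in the cloud";
    unsigned char digest[SHA256_DIGEST_LENGTH];

    /* Compute the digest before upload and keep it separately from the object. */
    SHA256(data, strlen((const char *)data), digest);

    /* Print the digest as hex; after download, recompute it and compare. */
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}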

Business and Operational Impacts

A plethora of impacts is possible with the implementation of this project in organizations, including the cloud providers and the consumers of cloud computing services.  The benefits will result from the efforts carried out to ensure information security and privacy in the cloud computing paradigm. The cloud computing providers will get more clients, because many fear moving to the cloud due to the impending security and privacy issues.  Clients that are enterprises will also be able to conduct their business without the fear of losing their data or of it being used inappropriately to endanger its privacy and confidentiality.

Risk Assessment and Analysis

Some risks accompany the accomplishment of the project under development.  It is therefore of paramount importance that those risks be adequately addressed to make sure that the project is a success.  The technical team responsible for technical tasks may lack the appropriate skills in the new technologies that will be used in developing the new security framework.  The technologies are also likely to change during the period of project implementation, since newer and better technologies are continually being developed. The other risk for this project is that expenditure may exceed the projection because of changes in requirements.

Feasibility Assessment and Analysis

The costs of acquiring the systems, such as the monitoring systems and the encryption technologies, the development personnel salaries, and other miscellaneous expenditure will total $50 million.  The cost of development cannot be equated to the benefits that will accrue from the project. For instance, the cloud computing companies will have a higher turnout of clients and, in turn, improved profit margins.  There will also be ease of management due to the use of more reliable technologies and the automation of many of the tasks in the cloud computing environment.

Implementation Strategy

The proper planning of a project from the beginning is what results in success, not the development task itself.  Without proper planning and strategies, the project may end up taking more time, consuming more resources, and facing more challenges than anticipated.  An analysis of the cloud computing environments in the market will be conducted to understand the requirements of each system. All stakeholders will be actively involved in the implementation, because their input can bring about the desired success.  There will be thorough testing to make sure that the implemented applications offer the projected benefits and capabilities.

Project Review and Approval Process

The implementation of the project in the cloud computing environment requires the active involvement of the quality assurance officers and the cloud computing experts. They will review the developed project to check whether all the development and coding principles were duly followed.  They will also make sure that the user specifications and the system specifications are met, as required by the system and user specification documents.

Recommendations

The cloud computing environment is faced with challenges regarding the security and privacy of data, so it is essential to address those challenges effectively in order to attract many organizations and individuals.  Security and privacy are crucial for information technology, because the violation of any requirements can result in legal action being taken against the responsible parties, or the organizations may lose their good reputation and customers.  The project should, therefore, be implemented in all the organizations that are using cloud computing so that they can better perform their tasks and effectively address the issues that have been bedeviling them.  Business leaders should be educated to understand the benefits that will accrue from this project implementation so that they will offer the required support for its successful implementation, such as funding.

Project Sign-Off

The sign-off of a project is the approver’s acceptance of the project contents as well as the overall intention of this business case, including the commitments described for a successful delivery of this initiative.  The approver also confirms that this business case is compliant with the relevant policies, procedures, strategies and the regulatory requirements (Information Systems Audit and Control Association, 2010).  The sign-off of this business case should only be done by the incumbent authorized persons to act as representatives of the business area where the role resides.

References


Chen, D. & Zhao, H. (2012). Data security and privacy protection issues in cloud computing.  In Proceeding of the International Conference on Computer Science and Electronics Engineering (ICCSEE '12), vol. 1, pp. 647–651, Hangzhou, China, March 2012.

Ferraro, J. (2012). Project management for non-project managers. New York: AMACOM.

Information Systems Audit and Control Association. (2010). The business case guide: Using Val IT 2.0. Rolling Meadows, IL: ISACA.

Kshetri, N. (2013). Privacy and security issues in cloud computing: the role of institutions and institutional evolution. Telecommunications Policy, 37( 4-5), 372–386.

Mahajan, P. et al. (2011). Depot: cloud storage with minimal trust. ACM Transactions on Computer Systems, 29(4).

Pearson, S. &  Benameur, A. (2010). Privacy, security and trust issues arising from cloud computing. In Proceedings of the 2nd IEEE International Conference on Cloud Computing Technology and Science (CloudCom '10), pp. 693–702, IEEE, December 2010.

Neisse, R., Holling, D., & Pretschner, A. (2011). Implementing trust in cloud infrastructures. In Proceedings of the 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid '11), pp. 524–533, IEEE Computer Society, May 2011.


Thursday, 10 March 2016 04:57

The Employment Process at Sriven Technologies

Written by

1st iteration

Name

Course

Instructor

Date

Iteration 1: The Employment Process at Sriven Technologies

The iteration of the employment process at Sriven Technologies, Inc. will involve an inquiry to find out how this company conducts its employment process, particularly regarding the employment of web-based application developers. The process of accomplishing this iteration will involve the company's human resource personnel, although I will also leverage the Web to find more information on the same. The iteration is planned to take two weeks, with working hours from 8 am to 5 pm, the normal office hours.

Plan

I planned to draw up the objectives for this iteration before commencing anything, in light of the duties that lay ahead of me concerning web-based application programming. My objectives included learning about the employment process, what is required in the process, who is involved, and what I would do to meet all the requirements to become qualified for the same. The human resource personnel would help me understand how to make an application for employment with the company, what qualifications they look for, and where the company advertises its employment opportunities. I planned to find out whether the company advertises by word of mouth, in the newspapers, or on the Web.  That was because companies have different platforms on which they make their advertisements, and each follows a unique process of finding and recruiting new employees.

I would also try to understand from the human resource personnel what experience is required of web-based application programmers and how the company measures that experience. I would also find out whether the company recruits fresh graduates and what it would expect of them.  The other thing that I planned to ask the human resource personnel about is the people responsible for recruiting the various types of employees. That is because there are many areas of professionalism, including IT, human resources, finance, research, and innovation, and there should be specific managers responsible for employing people in a given area.  Other companies just leave the work of recruiting entirely to the human resource manager (Monroe, H. Personal Communications, February 05, 2016). That would be problematic, especially when a company has only a single human resource manager who obviously does not have competence in all the areas of professionalism.

Action

I met with the human resource manager on the day I was to commence my internship at Sriven Technologies, Inc. The human resource manager first introduced me to the human resource personnel of the company so as to help me prepare adequately for the task that lay ahead of me: understanding the recruitment process. He helped me to understand the types of human resource personnel that the company had, including the operational human resource manager, the transformational human resource manager, and the relational human resource manager.  He also explained their roles: the operational HR is responsible for the administrative tasks, the relational HR supports the business processes, and the transformational HR is concerned with strategic HR tasks like knowledge management (Delbridge & Keenoy, 2010). I also came to understand that the human resource management function handles the issues related to people, such as compensation, recruitment, organizational development, benefits, performance management, communication, training, administration, safety, employee motivation, and employee wellness.

The human resource personnel collaborated in helping me understand the employment process in the organization and who is involved in accomplishing it. The ones responsible are the three of them, that is, the operational, the relational, and the transformational human resource managers. The process of getting a job in the company involves making an application for the position advertised, attending an interview session, and then receiving employment depending on the score of the interview session. During the interview process, the company involves staff with training in the area in which the interviewee is seeking employment (Johnson, K. Personal Communications, February 12, 2016). For instance, my interview would involve the web-based application programmers, who would work in conjunction with the human resource personnel to accomplish the process of employment.

They helped me understand that the interview mostly tested the practical skills in the area in which one was seeking employment in the company. The company could hire even fresh graduates, provided that they had the competence required to execute their duties as required (Magdalene, O. Personal Communications, February 13, 2016). That gave me a chance to be one of the candidates in case the company had such a position soon. They also helped me understand the experience required of a web-based application programmer, so that I could prepare to acquire those skills ahead of my employment process in the company.  I also learned that the company advertised its job positions on the Web as well as in the newspaper so as to reach a wide spectrum of applicants for the position. That helped them get the candidate most competent for that position. That also pushed me to strive to acquire the necessary skills, as I would be competing with several people for the same position.


Observation

Organizations are relying on human capital because it is the most valued and treasured asset for the performance of an organization (Bhoganadam & Rao, 2014). I observed that the human resource management department is an essential sector in an organization because it is responsible for ensuring that everything runs well. It ensures that the employees are highly motivated for their tasks and get the right terms regarding payment, and it replaces the employees who leave a job for one reason or another (Perry & Debra, 1997). The human resource managers had many involvements in the company, and they had to do everything they could to ensure that the employees exploited their potential and kept the performance of the organization at the top. I also observed that the employment process in an organization entails a lot more than just making an application and waiting to be called for an interview. I thought the iteration would involve only the interview and orientation into the company, but I observed that there were many things that I had to know.

I observed that understanding the employment process in an organization can help one know how to go about meeting the specific requirements so as to stand a high chance of getting an employment opportunity in a given organization. I observed that I would use this iteration to prepare adequately for the employment process in the company. I observed that many people apply wherever there is an advertisement of a vacancy in an organization, but what differentiates the applicants is the competence they have in executing their duties as required by the organization. Because of the many applicants, I observed that an interview is very important in filtering them so as to get the ones that meet the desired qualifications and practical skills (Billikopf, 2006).


Reflection

The iteration and my meeting with the company's human resource personnel were very helpful, because I was able to meet the objectives that I formulated at the beginning of the iteration.  It gave me profound knowledge and experience regarding human resource management, web-based application programming, and the employment process in an organization. I not only received the knowledge and experience of the employment process, but I also got to know what to do so as to stand a better chance of getting the employment compared with other applicants who might have applied for a similar job. I came to realize that the employment process in an organization is vital, because that is what helps an organization get the right people for the position advertised (Johnson, K. Personal Communications, February 18, 2016). The employment process iteration was very rich in information that will help me prepare adequately in acquiring the web-based programming skills aimed at garnering that position.

There were things that did not go as I had anticipated during this iteration of the employment process at Sriven Technologies. I also did not get to learn the other things that were on my list because of time limitations. For one, I had a plan of doing a search on the Web as an integral activity alongside the meetings in the company, but I did not do that. That must have made me leave out vital information that would have been helpful in my research. In the future, I will make an improvement on that so as to ensure that I use all the available media and platforms to gather the necessary information to accomplish my tasks adequately. I also discovered that the company did not have a technical recruiter responsible for recruiting the technical personnel. I suggest that the company have the three types of human resource personnel plus some specific recruiters so as to accomplish the employment process in the right way.

References

Bhoganadam, S. & Rao, D. (2014). A study on recruitment and selection process of Sai global Yarntex (India) Private Limited. International Journal of Management Research & Review, 4(10), 996-1006.

Billikopf, G. (2006). Practical Steps to Employee Selection. Retrieved from https://nature.berkeley.edu/ucce50/ag-labor/7labor/02.htm

Delbridge, R., & Keenoy, T. (2010). Beyond managerialism?. The International Journal of Human Resource Management, 21(6), 799-817.

Perry, L. & Debra, J. (1997). Strategic Human Resource Management, Public Personnel Management: Current Concerns, Future Challenges (2nd Ed.). Carolyn Ban and Norma M. Riccucci. New York: Longman.  Pp. 21-34.

Thursday, 10 March 2016 04:55

Literature reviews and proposal

Written by

Literature reviews and proposal

Name

Course

Instructor

Date

Literature review

Over the past couple of years, there has been an increase in the popularity of web-based applications. Some factors contribute to that tremendous rise in their use by organizations and individuals to provide access to a variety of services. Today many organizations and individuals use web-based applications to secure critical environments like financial, medical, and military systems.  Web-based systems consist of infrastructure components like databases and servers, as well as application-specific code like server-side CGI programs and HTML-embedded scripts (Kalani & Kalani, 2004). While experienced programmers develop the infrastructure components, the application code is often written by programmers who have little security training and have to develop the code under strict time constraints. As a result, they develop and deploy to the whole Internet web-based applications that are vulnerable, creating easily exploitable points that can lead to the compromise of entire networks.  The amelioration of those security issues requires designing and developing web-based applications that are secure. Testing of the web-based application is also vital, but it cannot take place without a thorough analysis of the current security threats.

Overview of Web-based Application

Today many enterprises are utilizing web-based applications as a solution that offers a low-cost and flexible way of supporting distributed collaborative work.  A web-based application not only disseminates work, but it also interacts with the users in the processing of their business tasks so that they can accomplish their business goals. Thus, programming and analysis of a web-based application need an approach that is different from the one used for websites that offer information in a uni-directional manner on the users' requests (Nielsen, 1995).  Programming the web application requires that the developer emphasizes a good visual design and offers a systematic way of designing the logical structure of the application.  There also exist methods for designing a web-based application; those models are very useful in modeling kiosk-type applications that help in navigating users to the desired information on the web in a systematic manner.

However, for the users of web-based applications, access to particular information they want is only part of their business goals.  There are other business goals, such as processing their business data and communicating and collaborating with their colleagues through the use of the web-based application.  The formal methods that exist do not provide solutions to critical questions pertaining to the programming and analysis of a web-based application (Kolšek, 2002).  Some of those questions that remain unanswered include, "How can users achieve their business goals while using web-based applications?" and "How do users interact with their colleagues while using the web-based application?" Maintenance is also another crucial issue as websites increase in size.  Tools that exist, such as WebAnalyzer, are useful in identifying broken links and vulnerabilities, but they fail to offer a solution to, or a way of avoiding, those problems.  Organizations can reduce their maintenance costs if they can detect errors in the design and analysis phases (Davis, 1990; Humphrey, 1989).

Technologies

There has been a continuous evolution of technologies for implementing web-based applications since the inception of the first mechanism for creating dynamic websites.  The subsequent paragraphs describe the steps in that evolution.

Common Gateway Interface

The Common Gateway Interface (CGI) was one of the first mechanisms used for the generation of dynamic content (Laverty & Scarpino, 2009). The common gateway standard defines a mechanism the server uses to interact with external applications.  It specifies the rules of that interaction; however, it does not dictate the usage of a specific technology for implementing those external applications.  That means the programmer can write CGI programs in any language and execute them on virtually all web servers.  One of the goals of CGI was to offer web-based interaction with legacy systems (Kalani & Kalani, 2004).  In that case, a CGI program functions as a gateway between the legacy system and the web server.  The CGI specification defines the various ways in which the web server communicates with a CGI program.
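To make the CGI mechanism concrete, the hedged C sketch below shows a minimal CGI program: the web server passes request data through environment variables such as QUERY_STRING, runs the executable, and returns whatever the program writes to standard output (an HTTP header block, a blank line, then the body) to the browser. The program's name and where it would be deployed are assumptions of this sketch.

#include <stdio.h>
#include <stdlib.h>

/* Minimal CGI program: request data arrives in environment variables,
 * and everything printed to stdout is sent back to the client. */
int main(void)
{
    const char *query = getenv("QUERY_STRING");   /* e.g. "name=value" from the URL */

    /* The header block must be terminated by a blank line before the body. */
    printf("Content-Type: text/html\r\n\r\n");

    printf("<html><body>\n");
    printf("<h1>Hello from a CGI program</h1>\n");
    printf("<p>Query string: %s</p>\n", query ? query : "(none)");
    printf("</body></html>\n");
    return 0;
}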

Embedded Web Application Frameworks

 Nowadays, the most common approach to implementing a web-based application is a middle way between the CGI mechanism and server-specific APIs (Umar, 1997). In this approach, the web server is provided with an extension that implements a framework for developing web applications.  Such a framework typically includes a compiler or interpreter used to execute the application's components and defines the rules that control the interaction between the application components and the server.  Frameworks vary greatly in the support they provide to the application developer: some only provide mechanisms for handling HTTP-specific features like cookies, connection handling, and authentication mechanisms, among others.  These web application frameworks are provided through programming languages such as Perl, Python, PHP, Java, Visual Basic, JScript, and C# (Keig, 2013).

Importance of Web-based Application

 Web-based applications are a way to take advantage of current technology to enhance productivity and efficiency in organizations.  They provide businesses with an opportunity to access their information from anywhere across the globe at any time (Grove, 2010).  They also help organizations to save money and time and to improve interactivity with their clients and partners.  A web-based application also allows the administration staff to perform their duties from any location, and the sales staff can access information from a remote location 24 hours a day, seven days a week (Curphey et al., 2005).  The only things one needs are a computer connected to the Internet, a web browser, a username, and a password, and then the corporate systems can be accessed from anywhere.

A web-based application is easy to use, and it can be implemented without any interruption to the existing work processes of the organization.  Whether an organization requires an e-commerce system or a content-managed solution, a customized web application can be developed to meet its business requirements (Grove, 2010).  Web-based software enables companies to interact with their applications as well as their data in a highly responsive and fluid manner.  With the right expertise in the creation and implementation of a web-based application, a company can have an edge over its competitors.

Proposal

My internship at Sriven Technologies will involve web-based application programming and analysis that will benefit both the organization at large and me. I will be engaged in critical tasks such as code review and the design, development, testing, and support of web-based applications. The internship will consist of five iterations, each with a cycle of planning, acting, observing, and reflecting, to offer an opportunity to refine the actions further.


Iteration 1: The Employment Process at Sriven Technologies

In this first iteration at Sriven Technologies, Inc., I will carry out an inquiry to find out the employment process in the company in light of programming and analysis of web-based applications. I will meet with the human resource personnel from the company, and they will guide me through the employment process for a web-based application programmer and developer. The Web will also be of great help, as it will be the platform for interacting with those resource persons.

Iteration 2: Brainstorming

In this iteration on brainstorming, I will meet with the company’s web-based application developers who will take me through the skills I require to qualify to be an expert in web-based application programming and analysis.  Many web-based application developers will be in the meeting so as to provide me with the knowledge of the skills I require to be competent in the area of web-based application design, development, and analysis.

Iteration 3: Training

In the training iteration, I will meet with the web-based analysts and the project manager, who will help me understand how to conduct web-based application development and analysis. They will train me on various approaches to developing a web-based application and enhancing the proper security features in it. The project manager will also guide me through the stages of project development and the deliverables in the various stages of the work breakdown structure.

Iteration 4: Understanding the Analysis and Design of a Web-based Application

In this iteration on understanding the analysis and design of a web-based application, I will meet with the web application developers, and they will help me with the way to go and the right methodology to use in designing and analyzing a web-based application.  That will be the background for the next phase of performing a penetration test project on a client's web application. The method I will study is the one that entails entity-relationship analysis, scenario analysis, and architecture design, since it is one of the most reliable methods of analyzing and designing a web-based application.

Iteration 5: Project on Penetration testing of the Client’s Website

In this iteration, I will be involved in conducting a penetration test for one of the company's clients as my main project in the company. I will use the skills gained from the previous iterations and ensure that I perform comprehensive penetration tests for the client. I will carry out this task with one of the company's junior web-based application analysts acting as my supervisor, and I will carry out all my activities while consulting that supervisor. The quality assurance team will then help in the remediation of any vulnerability found, as deemed appropriate.


References

Nielsen, J. (1995). Multimedia and Hypertext the Internet and Beyond.  Academic Press.

Laverty, J. & Scarpino, J. (2009). Web Application Security Instructional Paradigms and the IS Curriculum. Issues in Information Systems, 10(1), 87-96.

Kolšek, M. (2002). Session fixation vulnerability in web-based applications. Technical report, ACROS Security.

Curphey, M., Wiesman, A., Van der Stock, A. & Stirbei, R. (2005). A Guide to Building Secure Web Applications and Web Services. OWASP.


Grove, R. F. (2010). Web-based application development. Sudbury, Mass: Jones and Bartlett Publishers.


Umar, A. (1997). Application (re)engineering: Building web-based applications and dealing with legacies. Upper Saddle River, N.J: Prentice-Hall.


Kalani, A., & Kalani, P. (2004). Exam Cram 2: Developing and implementing web applications with Visual c# .Net and Visual Studio .Net ; [exam 70-315]. Indianapolis, Ind.: Que Certification.


Keig, A. (2013).Advanced Express Web Application Development. Packt Publishing: Birmingham.

Bottom of Form

Bottom of Form

Bottom of Form

Bottom of Form

The SDLC Models

Course title

Student’s name

Instructor

Date

There are numerous software development life cycle (SDLC) models used to develop software. SDLC models provide a theoretical guideline for the development process: a systematic way of developing software that is delivered within its deadline and has proper quality. By employing a suitable SDLC model, project managers can regulate the entire development strategy of the software. This paper compares the 7-step model and the 4-step model (Stefanou, 2003).

The 7-Step Software Development Life Cycle (SDLC) Model

Assessment phase - This is an information-intensive phase in which a requirements definition (RAD) is produced, describing the required functionality, environment, and interface for the project. It is the phase where the proposed project's estimates are determined and a preliminary project plan is made. These documents outline exactly how and when the project is proposed to be developed and delivered. The phase also gives an opportunity to vary the project requirements, if necessary, before development begins, and it is where precautions are put in place (Khurana & Gupta, 2012).

System Analysis - This step comes after the proposal is accepted. The CGC-Online team begins work to arrive at a detailed functional specification, which defines the software's behavior. On approval of the functional specification document, a project plan is submitted (Khurana & Gupta, 2012).

Software Design - In this phase, both the high-level and the low-level design are produced based on the software analysis report and the functional specifications. The Chief Architect drafts the software architecture, or high-level design. Later, the software engineering team prepares the low-level design documentation (ER diagrams and normalized structures), which describes the internal architecture of the software. This document forms the basis of coding (Khurana & Gupta, 2012).

Development (coding) - The low-level design document is the document the programmers work from exclusively. They also follow custom coding standards defined in the Computer Guard quality manual; using these standards in combination with state-of-the-art tools and technologies, the developers write the application code and ensure that quality standards are maintained (Khurana & Gupta, 2012).

Quality Testing - This is the next step, although quality assurance specialists begin their work from the very first day of the project. The design specifications have to meet strict user requirements for reliability and convenience, and the functional specifications have to be satisfied and achieved.

Further, a detailed test plan with test cases is developed and methodically followed throughout the coding phase, alongside unit testing. Other tests include independent module testing and integrated module testing. Because coding and testing are recursive activities, they run side by side in a closed loop until the system achieves the functionality defined in the detailed test plan. For internet-based applications, the system is tested online again after the site is deployed to a remote web server (Khurana & Gupta, 2012).
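To make the unit-testing step concrete, the minimal Python sketch below uses the standard unittest module to check a single hypothetical function in isolation. The function, its expected behavior, and the test cases are assumptions for illustration only; they are not drawn from the cited methodology.

import unittest


def calculate_discount(price: float, rate: float) -> float:
    """Hypothetical business rule: apply a fractional discount to a price."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)


class TestCalculateDiscount(unittest.TestCase):
    """A unit test exercises one component by itself, as described above."""

    def test_ten_percent_discount(self):
        self.assertEqual(calculate_discount(100.0, 0.10), 90.0)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(100.0, 1.5)


if __name__ == "__main__":
    unittest.main()

An integrated module test, by contrast, would exercise calculate_discount together with the modules it feeds into, rather than in isolation.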

The two steps that follow are software quality control measures: the implementation phase and the support phase.

Implementation - Once the software receives QC clearance, it is delivered and integrated into the user's environment. At this point, the project is considered 80% complete; it is 100% complete only when the system is stable and fully operational in real time (Khurana & Gupta, 2012).

Support - This is the final phase, once the system is installed and functional. The Computer Guard support team provides complete maintenance under an annual maintenance contract. The alternative to complete maintenance is a three-month free maintenance provision, and recommendations for future enhancements are made on an as-needed basis (Khurana & Gupta, 2012).

The 4-Step Software Development Life Cycle (SDLC) Model

The 4-step SDLC comprises four steps: planning, analysis, design, and implementation. The details of each step are discussed below.

Planning - This is the first phase of the 4-step SDLC model, in which the steering committee receives a request to develop a project. The committee reviews the project request, prioritizes the project, allocates resources, and identifies the project development team. A feasibility analysis is done to measure how suitable the software development is to the company, that is, the viability of the project. The four feasibility tests carried out are:

  • Operational feasibility - To guarantee that the software will operate effectively once deployed
  • Schedule feasibility - To ensure that the project can be completed within the specified period; the schedule becomes the point of reference throughout the project for checking what should get done, when, and why.
  • Economic feasibility (also called cost/benefit feasibility) - Done to ensure that the company can gather the required resources for the project's completion and that the benefits justify the cost (a brief cost/benefit sketch follows the Planning discussion below).
  • Technical feasibility - This helps the organization determine whether the new software is technically achievable; there is no need to spend resources building something that cannot work.

This phase is where an understanding of the purpose of the whole project is built, along with an understanding of the information involved and the value of the project to the organization (Massey & Satao, 2012).
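As a hedged illustration of the economic (cost/benefit) feasibility test mentioned in the list above, the short Python sketch below computes a simple payback period from assumed development-cost and annual-benefit figures. The numbers and the three-year decision rule are illustrative assumptions, not values from the cited sources.

def payback_period_years(development_cost: float, annual_net_benefit: float) -> float:
    """Years needed for cumulative benefits to cover the initial development cost."""
    if annual_net_benefit <= 0:
        raise ValueError("annual net benefit must be positive for the project to pay back")
    return development_cost / annual_net_benefit


# Illustrative figures only (assumed, not from the cited sources).
cost = 120_000.0    # estimated development cost
benefit = 48_000.0  # estimated annual net benefit after deployment

years = payback_period_years(cost, benefit)
print(f"Payback period: {years:.1f} years")  # prints 2.5 years for these figures

# A hypothetical decision rule: treat the project as economically feasible
# if it pays back within three years.
print("Economically feasible" if years <= 3 else "Not economically feasible")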

Analysis - This is the second phase of the 4-step SDLC model, in which two main activities take place: the preliminary investigation, also known as the feasibility study, and the detailed analysis of performance.

In the analysis of performance, the following three things take place:

  • A study of how the current system works
  • Determination of the users' wants and requirements
  • Recommendation of a solution, also regarded as the logical design

In the preliminary investigation, the exact nature of the problem or improvement is determined, along with whether it is worth pursuing. It is an assessment of the practicality of the proposed project, and the aim is to rationally and objectively uncover the strengths and weaknesses of the existing software as well as the proposed software. It is through this investigation that the threats and opportunities the proposed project presents to its environment are identified. The findings are presented in a feasibility report, also referred to as a feasibility study (Massey & Satao, 2012).

Design - This is the third phase of the 4-step SDLC model, in which hardware and software are acquired and a new or modified information system is developed. A detailed design is produced, that is, a detailed specification for the components of the proposed solution. The activities include input and output design, physical design, interface design, architecture design, and program design (Massey & Satao, 2012).

Implementation - This is the final step in the 4-step SDLC model, in which the actual building and installation of the system happens. The purpose of this phase is to construct the new or modified software and deliver it to the user. The main activities are:

  • Convert to new system
  • Train users
  • Install and test new system
  • Develop programs

Also, various tests are carried out in this phase, and they include:

  • Unit test - To verify that each program works by itself
  • Systems test - To verify that all programs in the application work together
  • Integration test - To verify that the application works with other applications (Massey & Satao, 2012)

Comparison of the 7-Step SDLC Model with the 4-Step SDLC Model

In deciding which model to use, the models are always compared, especially when the requirements are known well in advance and very well understood. For instance, the models are compared to determine which one is suitable given the need for full control over the project or the size of the project (Amlani, 2012).

The 4-step model is suitable for complex projects, as detailed analysis is carried out along with extensive testing to ensure the software works as expected, unlike the 7-step model, where testing focuses on quality rather than functionality and the analysis is less detailed (Amlani, 2012).

The 4-step model involves user participation, thus increasing the chances of early acceptance by the user community and reducing overall project risk, unlike the 7-step model, which involves the user less (Amlani, 2012).

Feature                       The 7-Step SDLC Model      The 4-Step SDLC Model
Requirement specifications    Beginning                  Beginning
Cost                          Low                        Expensive
Simplicity                    Simple                     Intermediate
Risk involvement              Low                        Low
Expertise                     High                       High
Flexibility to change         Difficult                  Easy
User involvement              Only at the beginning      High
Maintenance                   High                       Low
Duration                      Long                       Long

Table 1: Comparison of the 7-step SDLC model with the 4-step SDLC model (Amlani, 2013).

References

Amlani, R. D. (2013). Comparison of different SDLC models. International Journal of Computer Applications & Information Technology, 2.

Amlani, R. D. (2012). Advantages and limitations of different SDLC models. International Journal of Computer Applications & Information Technology, 1, 6-11.

Khurana, G., & Gupta, S. (2012). Study & comparison of software development life cycle models. IJREAS, 2(2), 1-9.

Massey, V., & Satao, K. (2012). Comparing various SDLC models and the new proposed model on the basis of available methodology. International Journal of Advanced Research in Computer Science and Software Engineering, 2(4), 170-177.

Stefanou, K. (2003). Software development life cycle. Encyclopedia of Information Systems, 4, 329-344.
