User talk:Parrytomar

From WikiEducator
Welcome to WikiEducator!
I'm your WikiNeighbour and I welcome you to our dynamic community of 80,739 WikiEducators.
I see you are starting your WE userpage! Enjoy the journey and work at your pace. I'm certain you will become a great WikiEducator. Congratulations!

If there's anything I can help you with, please let me know!

Also, join our live WikiEducator session going on this Wednesday! --Benjamin Stewart 04:30, 20 July 2010 (UTC)

Hi How r u


Thread title | Replies | Last modified
ASE | 0 | 05:46, 8 December 2021
QoS Based Web Services | 0 | 08:03, 14 May 2017
Software and System Erosion Systems | 0 | 07:56, 14 May 2017
Sentiments Analysis through computer and electronics science | 0 | 07:48, 14 May 2017
Recommendation Systems | 0 | 07:43, 14 May 2017
Optical Switches and Communication | 0 | 07:27, 14 May 2017
Optical Waveguide and IoT | 0 | 07:27, 14 May 2017
Component-Based Software Engineering for Integration and Communication | 0 | 07:25, 14 May 2017
Internet of Things Background and Architecture | 0 | 17:58, 28 April 2017

Advanced software engineering

Parrytomar (talk)17:50, 25 May 2021

QoS Based Web Services

Pradeep Tomar and Gurjit Kaur

Web services implement a client-server architecture in which client and server applications communicate by exchanging data over the Hypertext Transfer Protocol on the web. Web services are the means by which software applications interoperate across platforms; they are distinguished by their interoperable and extensible nature, and their descriptions are written in XML. Loosely coupled web services are combined to carry out complex operations, and simple web services interact with each other by exchanging data to build value-added services. Open standards such as the Simple Object Access Protocol (SOAP), the Web Service Description Language (WSDL), XML, and Universal Description, Discovery and Integration (UDDI) are used to integrate web applications through web services. Each standard plays a different role: data is tagged using XML, data is transmitted using SOAP, services are described using WSDL, and available services are listed through UDDI. Because XML is used for communication, applications from differing sources can communicate in real time through web services, and there is no restriction to any specific operating platform or language for a web service to function.
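As an illustration of the XML plumbing described above, the sketch below builds a SOAP request envelope using only Python's standard library. The service namespace, operation name (GetQuote), and its parameter are hypothetical placeholders, not a real endpoint.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.org/stockquote"  # hypothetical service namespace

def build_soap_request(symbol):
    """Build a minimal SOAP envelope invoking a hypothetical GetQuote operation."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}GetQuote")
    ET.SubElement(op, f"{{{SVC_NS}}}Symbol").text = symbol
    return ET.tostring(envelope, encoding="unicode")

request = build_soap_request("ACME")
```

A client would POST such a payload to the endpoint URL advertised in the service's WSDL description and parse the XML response it receives back.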

Role of Functional and Non-Functional Attributes

Web services are gaining interest every second, so it is becoming very important to differentiate between services that perform the same kind of functionality. For this purpose, functional and non-functional attributes play an important role: they help users decide whether or not to adopt a service. Several web services provide relatively similar functionality but differ in the QoS parameters involved. To offer a user a suitable service, it is necessary to understand that user's needs. QoS parameters are scenario-based and are therefore subject to change from case to case. Since a user may require multiple QoS attributes at a time, a composition of these QoS attributes must be considered when selecting a web service.
• Reliability – The ability to perform the intended operations under stated conditions with the available resources. Web services should be reliable so as to provide a hassle-free experience to the end user.
• Robustness – A robust web service performs consistently even when partially fed with ambiguous input. Web services should have a high degree of consistency.
• Availability – Since immediate usage may be required, the service should always be live and should respond to every valid request of the user. Availability is the probability that the system is up, and it is related to reliability.
• Rating – This attribute is based on usage statistics submitted by users or websites. It helps in knowing which services are most frequently used or best known among users.
• Service – This attribute captures the quality of the response to a request. An important consideration is the quality of service relative to the cost of the selected web service: the better the service for the price, the better the rating for that service.
• Cost – The charges for using the service, an important factor in deciding whether or not to select it. A service that combines adequate availability with a lower cost will earn the provider a good rating.
• Interoperability – The ability to operate in different environments, so that programmers do not have to worry about services being written in any specific language or for any specific platform. Web services should be able to operate across changing platforms.
• Security – Service providers enforce enhanced security for web services through mechanisms such as confidentiality, authenticity, data integrity, and encryption.
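To make the idea of composing QoS attributes concrete, here is an illustrative sketch (not from the text) of ranking candidate services that offer the same functionality by a weighted sum of the attributes listed above. The services, weights, and values are assumed, with every attribute normalized to [0, 1] and cost inverted so that higher is always better.

```python
def qos_score(service, weights):
    """Weighted sum of normalized QoS attribute values."""
    return sum(weights[attr] * service[attr] for attr in weights)

# Hypothetical weights and candidate services (values normalized to [0, 1]).
weights = {"reliability": 0.3, "availability": 0.3, "cost": 0.2, "security": 0.2}
candidates = {
    "svc_a": {"reliability": 0.9, "availability": 0.8, "cost": 0.6, "security": 0.7},
    "svc_b": {"reliability": 0.7, "availability": 0.9, "cost": 0.9, "security": 0.6},
}

# Select the candidate with the highest composed QoS score.
best = max(candidates, key=lambda name: qos_score(candidates[name], weights))
```

In a real selection scenario the weights would come from the user's stated preferences, which is exactly why the same pool of services can yield different "best" choices for different users.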

FUZZY APPROACH FOR WEB SERVICES

Fuzzy logic is a computing technique that is widely used in various fields. It was developed by Lotfi Zadeh in the 1960s and 1970s to model problems in which uncertainty is involved. Fuzzy logic is an extension of ordinary logic whose main advantage is that fuzzy sets are used for the membership of a variable, which gives it many advantages over ordinary logic.

Fuzzy Inference System

The traditional idealistic mathematical approach was improved to accommodate partial truth by the introduction of fuzzy set theory by Professor Lotfi A. Zadeh [12] in 1965. Fuzzy logic provides a convenient way to represent linguistic variables and subjective probability; its motivation and justification are that linguistic characterizations are less specific than numerical ones. Most situations in the world require crisp actions, and these actions are arrived at by processing fuzzy information. Fuzzy logic provides the means of inferring from fuzzy information to produce crisp actions, through four tools:
• Fuzzification: transform crisp world information into fuzzy information.
• Inference: infer from the fuzzy information to arrive at a fuzzy action.
• Composition: aggregate the outputs of all the fuzzy actions.
• Defuzzification: transform the fuzzy action back into a crisp action.
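The four tools above can be sketched end to end for a single input variable. Everything in this sketch is an illustrative assumption: the "load" input, the triangular membership functions, the two rules, and the output centers are invented for demonstration, not prescribed by the text.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fis(load):
    """Tiny fuzzy inference system: crisp load in [0, 100] -> crisp response time."""
    # Fuzzification: crisp input -> membership degree per linguistic term.
    mu = {"low": tri(load, -1, 0, 50), "high": tri(load, 50, 100, 101)}
    # Inference: one rule per term ("if load is T then response is R").
    rules = {"low": "fast", "high": "slow"}
    # Composition: aggregate firing strengths per output term (max as logical sum).
    agg = {}
    for term, out in rules.items():
        agg[out] = max(agg.get(out, 0.0), mu[term])
    # Defuzzification: weighted average over assumed output-term centers (seconds).
    centers = {"fast": 0.2, "slow": 2.0}
    total = sum(agg.values())
    return sum(agg[t] * centers[t] for t in agg) / total if total else 0.0
```

With these assumptions, a low load defuzzifies near the "fast" center and a high load near the "slow" center.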

Fuzzification

Fuzzification is the process of making a crisp quantity "fuzzy", which allows uncertainty in any parameter due to imprecision, ambiguity, or vagueness to be addressed. In artificial intelligence, the most common way to represent human knowledge is in terms of natural language, i.e. linguistic variables. Depending on the data and its uncertainty, the input and output parameters are fuzzified in terms of linguistic descriptors such as high, low, medium, and small to translate them into fuzzy variables; for example, fuzzy boundaries for a parameter such as "age" can be formed by linguistic expressions such as "young", "middle aged", and "old". Fuzzy sets for the input parameters and the required single output parameter are then formulated based on expert knowledge and experience in the particular domain.
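The "age" example above can be sketched as follows. The breakpoints (fully "young" below 25, fully "old" above 65) are illustrative assumptions, not boundaries prescribed by the text.

```python
def fuzzify_age(age):
    """Map a crisp age to membership degrees in three assumed linguistic terms."""
    young = max(0.0, min(1.0, (35 - age) / 10))   # degree 1.0 at or below age 25
    old = max(0.0, min(1.0, (age - 55) / 10))     # degree 1.0 at or above age 65
    middle = max(0.0, 1.0 - young - old)          # the remaining membership
    return {"young": young, "middle aged": middle, "old": old}
```

Note that an age of 30 belongs partly to "young" and partly to "middle aged" at the same time, which is precisely what a crisp set cannot express.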

Inference

Having specified the expected number of faults and its influencing parameters, the logical next step is to specify how the expected number of faults varies as a function of those parameters. Experts provide fuzzy rules in the form of if..then statements that relate the expected number of faults to various levels of the influencing parameters, based on their knowledge and experience. The fuzzy processor uses these linguistic rules to determine what control action should occur in response to a given set of input values. Rule evaluation, also referred to as fuzzy inference, evaluates each rule with the inputs that were generated by the fuzzification process.
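Rule evaluation can be sketched as taking the fuzzy AND (minimum) of a rule's antecedent membership degrees. The rule base relating input levels to an expected-fault level, and the membership values, are illustrative assumptions.

```python
def fire(rule, memberships):
    """Firing strength of one if..then rule: min over its antecedent terms."""
    return min(memberships[var][term] for var, term in rule["if"].items())

# Hypothetical fuzzified inputs produced by an earlier fuzzification step.
memberships = {
    "complexity": {"low": 0.2, "high": 0.8},
    "experience": {"low": 0.6, "high": 0.4},
}

# "If complexity is high AND experience is low, then expected faults are high."
rule = {"if": {"complexity": "high", "experience": "low"},
        "then": ("faults", "high")}

strength = fire(rule, memberships)  # min of the two antecedent degrees
```

Each rule in the expert-supplied rule base is evaluated this way, and the resulting firing strengths feed the composition step described next.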

Composition

The inputs are combined logically using the AND operator to produce output response values for all expected inputs. The active conclusions are then combined into a logical sum for each membership function, and a firing strength for each output membership function is computed. Finally, the fuzzy outputs of all rules are aggregated into one fuzzy set over the various levels of the consequent.

Defuzzification

The logical sums are combined in the defuzzification process to produce the crisp output. To obtain a crisp decision from the fuzzy output, the fuzzy set, or the set of singletons, has to be defuzzified. Several heuristic defuzzification methods exist, such as the bisector method and the centroid method; one common choice is to take the center of gravity of the output fuzzy set. For the discrete case with singletons, the mean of maximum method is usually used, in which the point with the maximum singleton is chosen.
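The two methods named above can be sketched over small sampled sets. The sample values are illustrative; in practice the aggregated fuzzy set comes from the composition step.

```python
def centroid(xs, mus):
    """Center of gravity of a sampled fuzzy set: sum(x * mu) / sum(mu)."""
    total = sum(mus)
    return sum(x * m for x, m in zip(xs, mus)) / total if total else 0.0

def mean_of_maximum(singletons):
    """Mean of the points whose membership equals the maximum membership."""
    peak = max(singletons.values())
    points = [x for x, m in singletons.items() if m == peak]
    return sum(points) / len(points)

# Symmetric aggregated set centered on x = 2, so its centroid is 2.
crisp = centroid([0, 1, 2, 3], [0.0, 0.5, 1.0, 0.5])
```

For a set of discrete singletons with a tie at the maximum, mean of maximum averages the tied points rather than picking one arbitrarily.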

REVIEW OF LITERATURE OF QOS BASED WEB SERVICES

N. Hema priya [1] proposed an architecture for web services based on a fuzzy-rule-based algorithm. The architecture consists of clients, service providers, and their registry, as shown in Fig. 2, which are considered the main building elements of web services. A number of service providers offering different services register their service-level agreements, and the user then searches for services according to their needs. Based on the type of service, some providers also provide authentication. Various QoS attributes are considered, and their composition is also taken (if needed), using the proposed web service architecture.

The architecture consisted of a fuzzy service discovery broker that acts as middleware, a fuzzy engine, and a fuzzy classifier for evaluating the QoS criteria of the registered services. The fuzzy engine uses inference rules stored in a repository and assigns a weight to each service. Various standards are used to implement web services, each playing a different role in the overall service architecture: a service provider uses the internet to publish the service, the Web Service Description Language handles the description, Universal Description and Discovery Integration keeps track of the files, and the Simple Object Access Protocol is used to invoke the service whenever requested. Shuping Ran [2] observed that although web services have gained interest, their adoption has not kept pace, and identified QoS as one of the main reasons for the slow adoption. The paper proposed a model for the discovery of web services in which both functional and quality-of-service attributes are used to discover a service. Previous-generation web service discovery models are highly unregulated because of the UDDI registries, and a very high percentage of UDDI registries have unusable links. A new regulated model was proposed that can coexist with unregulated registries: the unregulated registries continue to offer services to those for whom service quality is of no significance, while applicants who need service quality assurance are served by the regulated model. Doru Todinca [3] proposed an approach in which user preferences and QoS characteristics play the major role in the selection of services. The approach consists of a vocabulary for describing services and their domain, and user-preference-based service selection handled by a trader, employing pairwise comparison of services and algorithms for their ranking.
The idea of this approach was that, from the user preferences, fuzzy rules are automatically generated and used in a fuzzy inference system for ranking the services. The paper presented trials to evaluate the approach using a prototype implementation of a service broker. In this prototype, fuzzy inference rules are used to select the most apt match for each request, according to its QoS requirements, from a pool of imperfect services. The approach provides a method to manage QoS information using fuzzy categories and a way to describe choices and requests in the service broker tool. It is capable of generating automated fuzzy rules for each set of individual choices or preferences; after the fuzzy rules are generated, each candidate from the service pool is tested against them. Al-Masri and Mahmoud [4] proposed an idea to solve these problems using a keyword search mechanism and a Web Service Relevance Function for assigning and measuring the relevancy of a service. A crawler engine was employed to provide quality ranking, and users could use the model to search and manage criteria based on their preferences. The service with the highest rating is considered the most relevant to the user's interests, and the approach reduced the cost of service. For finding apt web services, a blend of web service attributes was taken as constraints, which allowed the web services repository-building architecture to be extended with quality-driven discovery of web services. The approach showcases the effectiveness of employing QoS attributes in search requests, outputting results as constraints and elements. To instill confidence among users before a service is invoked, proper information and assurance about that service is provided by employing QoS attributes while finding web services by preference. A service ranking mechanism was also proposed.
Shen and Su [5] proposed a new model for web services based on automata and formal logic; to represent the semantic properties of service behaviors, a new query language was developed. Chengying Mao [6] adopted a method that uses Petri nets to compute the complexity of web service compositions. The method provides two metric sets for evaluating a composition, its execution logic and the dependencies in its workflow, as in a business-operation depiction. The paper proposed metrics for the complexity of web service workflows described by Petri nets: first the Petri-net representations and the corresponding basic elements of business processes were analyzed, then control-flow metrics were devised, and finally the main aspects of web service composition were addressed. Two types of metrics were proposed for the workflow of a web service composition: count-based metrics and execution-path-based metrics. Susila and Vadivel (2011) [7] presented a scheme that uses a data mining technique (WEKA) over Web Service Description Language files to choose the most apt service. The paper proposed an extended Service-Oriented Architecture that applies data mining over QoS attributes to discover suitable web services: QoS attributes such as availability, security, latency, and cost are taken into consideration, and the WEKA algorithms are then applied to the data set. WEKA provides tools for classification, clustering, regression, pre-processing, and visualization of data. Palanikkumar [8] used the Bee Colony Optimization (BCO) metaheuristic, which is based on evolution, for optimizing QoS locally; in this approach BCO is employed to solve deterministic and combinatorial problems. Qusay H. Mahmoud and Eyhab Al-Masri [9] advocated that users should not have to waste endless time going through UDDI-based business registries to find suitable web services on mobile devices; the process of searching for these web services must be effective and effortless.
Their paper discussed issues related to effective and time-saving access and discovery of web services across several business registries, and introduced a new discovery engine named the Web Service Crawler Engine (WSCE). The role of WSCE is to crawl several registries and generate a central repository of web services, which is then used for faster and more efficient discovery. The paper presented a new framework capable of extending the Web Service Repository Builder architecture by improving the discovery of web services without changing current standards, introducing a crawler engine able to crawl across the different available UBRs. Hoi Chan and Trieu Chieu [10] described a new method in which web services are ranked and selected based on certain QoS attributes and prior knowledge. In this approach, the QoS attributes of web services and the relationships among web services are represented as a matrix, and a singular value decomposition (SVD) technique together with an adaptive weighting system is used to extract the high-order correlations among web services and their related QoS attributes in order to estimate the selection of recommended web services. The approach enables efficient selection and composition of web services using the SVD technique, and introduces the idea of a quality matrix as a model for storing QoS information for every monitored service. Glen Dobson and Stephen Hall [11] described a non-functional-requirement ontology that is used for structuring and expressing constraints as part of a service quality specification; the proposed ontology was part of the European-funded SeCSE integrated project. Liangzhao Zeng and Anne H.H. Ngu [1] proposed a loose, unbiased, and potent model of QoS computation for the selection of web services, implemented as a QoS registry in a hypothetical phone service marketplace application.
A framework was proposed in which the QoS model is extensible and the QoS information is either given by service providers, computed from execution monitoring done by the end users, or gathered from customer feedback on the QoS criteria. The framework aims at enhancing QoS modeling and computation, and consists of an extensible QoS model, preference-oriented service ranking, and unbiased, open QoS computation. Qian Ma and Hao Wang [13] proposed a framework for semantic, QoS-aware web services, achieved by combining semantic matchmaking with constraint programming. First, QoS data is presented in service descriptions using a QoS ontology; then syntactic matchmaking is made semantic by employing ontology reasoning. Once the compatibility of the various concepts is confirmed, complex QoS conditions are solved and a selection algorithm provides the optimal deal. Lijun Mei and W.K. Chan [14] presented an adaptable framework that prevents problem-causing external services from being used in a company's service-based applications. The framework uses the Web Service Description Language information available in public registries to estimate a schema of the network of services; link analysis is then performed on the schema to recognize the services that are popular among different service consumers at any particular instant, and service compositions are procedurally built from the most popular services. The framework is capable of recognizing reliable external services for use in service-based applications within an organizational environment; in it, a consumer X using a service Y can count the number of times Y served X and the number of cases of failure. Daniel A. Menascé and Vinod Dubey [15] extended previous work on QoS brokerage for service-oriented architectures.
In their paper, a service selection QoS broker was designed, implemented, and evaluated that maximizes a utility function for service consumers. The purpose of the utility function is to let stakeholders assign a value to the system as a function of attributes such as efficiency, availability, and response time. The work assumes that users provide their utility functions, along with their cost preference for the requested service, to the QoS broker. The broker and the service were implemented on the Java Enterprise Edition platform. The work also addressed the performance and availability of the QoS broker itself, and was extended to provide a flexible and loosely coupled integration scheme, with components and services developed on the Java Enterprise Edition platform.

Parrytomar (talk)08:03, 14 May 2017

Software and System Erosion Systems

Pradeep Tomar and Gurjit Kaur

SOFTWARE AND SYSTEM EROSION

Software erosion, or software rot, is the gradual degradation of software performance and responsiveness over time, which ultimately results in the software becoming faulty or unusable in a productive environment. Software erosion is generally categorised into two types:
• Dormant erosion – Occurs when the software is not used frequently, so that the environment in which it operates gradually transforms around it. Gradual variation in the rest of the software application and in user needs also contributes to this erosion.
• Active erosion – Occurs when the software undergoes continuous modifications and changes to its design and architecture. Though most software does require constant upgrading, this ultimately leads to an evolution process that makes the program differ from its original design. As continuous evolution goes on, the original program logic may be invalidated, introducing new bugs.

Factors Responsible for Software and System Erosion

Several factors can be responsible for software erosion at a particular time; it is rarely the case that a single factor is solely responsible. The factors include the transformation of the environment in which the software operates, degradation of compatibility among different sections of the software itself, and the appearance of bugs in rarely used code. The various erosion symptoms and factors are presented in later chapters for a clear understanding of the causes behind software erosion, together with the maintenance actions corresponding to each symptom, which can be applied under the various erosion circumstances.

NEED FOR SOFTWARE MAINTENANCE

Software maintenance is the process of modifying a software product after it has been released, to rectify bugs and to improve performance and other attributes associated with it. It is one of the most important phases of the Software Development Life Cycle (SDLC), and it aims to preserve the life of the software for as long as possible with minimal problems. The need for maintenance arises once the software begins to differ from the original requirements it was intended to fulfil; as this happens, the software also begins to differ from its original design and architecture. It is at this point that the performance of the software degrades and its efficiency decreases, resulting in its erosion. Continued use of eroded software is not recommended, as it hampers productivity; it therefore becomes necessary to maintain software in the long run. Over a software product's lifetime, the type of maintenance activity carried out may differ in nature. Some of the maintenance types are discussed below:
• Adaptive maintenance – Carried out when the software has to deal with changes in the environment in which it operates. A set of updates is applied to maintain coordination between the environment and the software.
• Corrective maintenance – Carried out when users and testers report bugs and errors in released software.
• Perfective maintenance – Done to preserve the life of the software and keep it usable over a long period of time. It addresses new user requirements and aims to improve the performance and reliability of the software.
• Preventive maintenance – Done to secure the future needs of the software.
The objective is to address issues that are not a major concern at present but may cause serious problems in the future.

2.3.1 Maintenance Models – These models play an important role because, most of the time, maintenance experts are unaware of the specifications, architecture, and requirements of a software system, and traditional development models are not of much use to a maintenance team. Some of the models are discussed below.
• Quick-fix model – The quick-fix model is a very simple model of software maintenance. It is based on an ad hoc approach in which the defect is first identified and then developers fix it: the developer waits for a problem to occur and then takes the necessary action to fix it. The main advantage of this model is that it takes much less time and cost than other models, but it is useful only for small projects. In this model, all changes are made directly to the source code of the project without consideration for future planning. In the software industry, time is the most important factor: customers sometimes need modified software as early as possible and cannot wait long. Under this model, software is generally modified without specifications or designs for the project.

It consists of two phases:
• Build code
• Fix

• Iterative Enhancement model – This model was originally designed as a development process model, in which the entire set of requirements is defined as early as possible and the functionality of the final software is fully understood. It is very useful for maintaining a software product because several cycles are conducted in an iterative fashion: although proposed as a development model, it turned out to be well suited to software maintenance as well. The model has three phases:
• Analysis of the software system
• Classification of the proposed modifications
• Redesign and implementation of the requested modifications
In the first phase, analysis of the software system, the requirements are analyzed before the maintenance process starts. In the second phase, each proposed modification is classified based on its complexity level and technical issues. In the last phase, the software is redesigned and the modification request is implemented. Note that at the end of each phase the documentation should be updated. The objective of this model is to reduce complexity and to maintain a good design for the software. The documentation of each software life cycle activity (requirements and specification, design, coding, testing, etc.) is modified at the highest level. The model also supports the reuse of modules in other modules, and incorporates changes in the software based entirely on the analysis of the existing system.

FUZZY LOGIC: AN APPROACH

Uncertainty is inherent and unavoidable in software development processes and products. Certifying the understanding of a software product is no longer a question of "yes" or "no" but a question of "to what degree", and fuzzy logic is an ideal tool to model this. A fuzzy model is built from prior knowledge of the software components and from the experts' historical records of those components. The difference lies in the fact that in fuzzy logic, instead of assigning subjective probabilities to the parameters, each of the input and output parameters is defined linguistically. Linguistic variables are central to fuzzy logic manipulations: a linguistic term is false when the value assigned to the variable is 0 and true when the assigned value is 1, and linguistic variables used in day-to-day speech convey relative information about the object under consideration. To understand fuzzy logic, it is important to discuss fuzzy sets. A fuzzy set can be defined as a collection of elements in a universe of information where the set boundary is not clearly distinguishable; in fact, it is vague and ambiguous. In a fuzzy set, an element of the universe of discourse can belong to the set with any value between 0 and 1, called its degree of membership; the degree of truthfulness is high when the value is near 1. The membership function, which is unique to a fuzzy set, assigns this membership degree to each element of the universe of discourse.
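A discrete fuzzy set and its membership function can be sketched directly. The "well-tested component" set and its membership values are illustrative assumptions, not data from the text.

```python
# A discrete fuzzy set over a universe of software components:
# each element carries a degree of membership in [0, 1].
well_tested = {"parser": 0.9, "scheduler": 0.6, "logger": 0.3}

def membership(fuzzy_set, element):
    """Membership function: degree in [0, 1]; 0.0 outside the support."""
    return fuzzy_set.get(element, 0.0)

def fuzzy_union(a, b):
    """Standard fuzzy union: pointwise maximum of membership degrees."""
    return {x: max(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}
```

Unlike a crisp set, asking whether "scheduler" is well tested yields a degree (0.6 here) rather than a yes/no answer.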

Fuzzy Inference System

The traditional idealistic mathematical approach was improved to accommodate partial truth by the introduction of fuzzy set theory by Professor Lotfi A. Zadeh [20] in 1965. Fuzzy logic provides a convenient way to represent linguistic variables and subjective probability; its motivation and justification are that linguistic characterizations are less specific than numerical ones. Most situations in the world require crisp actions, and these actions are arrived at by processing fuzzy information. Fuzzy logic provides the means of inferring from fuzzy information to produce crisp actions, through four tools:
• Fuzzification
• Inference
• Composition
• Defuzzification

Fuzzification – Fuzzification is the process of making a crisp quantity "fuzzy", which allows uncertainty in any parameter due to imprecision, ambiguity, or vagueness to be addressed. In artificial intelligence, the most common way to represent human knowledge is in terms of natural language, i.e. linguistic variables. Depending on the data and its uncertainty, the input and output parameters are fuzzified in terms of linguistic descriptors such as high, low, medium, and small to translate them into fuzzy variables. Fuzzy sets for the input parameters and the required single output parameter are therefore formulated based on expert knowledge and experience in the particular domain. Linguistic descriptors such as High, Medium, Low, Small, and Large are assigned to ranges of values for the output and input parameters. Since these descriptors form the basis for capturing expert input on the impact of the input parameters on the number of faults, it is important to calibrate them to how they are commonly interpreted by the experts providing the input: referring to a variable as High should evoke the same understanding among all the experts.

Fuzzy sets for the inputs and required single output are formulated based on the expert’s knowledge and experience in the particular domain as per the development standards of the organization.

Membership values for the input parameters are calculated from the fuzzy sets drawn by the experts; these fuzzy sets form the basis for calculating membership values per the specification of the individual project.

Inference – Having specified the expected number of faults and its influencing parameters, the logical next step is to specify how the expected number of faults varies as a function of those parameters. Experts provide fuzzy rules in the form of if..then statements that relate the expected number of faults to various levels of the influencing parameters, based on their knowledge and experience. The fuzzy processor uses these linguistic rules to determine what control action should occur in response to a given set of input values. Rule evaluation, also referred to as fuzzy inference, evaluates each rule with the inputs that were created by the fuzzification process.

Composition – The AND operator is used to combine the inputs logically and generate output response values for the given inputs. For each membership function, the active conclusions are combined into a logical sum, and a firing strength for each output membership function is computed. Finally, the fuzzy outputs of all rules are aggregated into one fuzzy set over the various levels of the consequent.
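The composition step described above can be sketched as follows: each rule's firing strength (already obtained by fuzzy AND over its inputs) is aggregated per output membership function with a logical sum, modeled here as the maximum. The rule outputs and strengths are illustrative assumptions.

```python
# Hypothetical fired rules: (output term, firing strength) pairs.
fired = [
    ("faults_high", 0.6),   # rule 1
    ("faults_high", 0.3),   # rule 2, same consequent as rule 1
    ("faults_low", 0.4),    # rule 3
]

def compose(fired_rules):
    """Aggregate firing strengths per output term (max as the logical sum)."""
    agg = {}
    for term, strength in fired_rules:
        agg[term] = max(agg.get(term, 0.0), strength)
    return agg
```

The resulting dictionary is the single aggregated fuzzy set that the defuzzification step then reduces to a crisp value.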

Defuzzification – The logical sums are combined in the defuzzification process to produce the crisp output. To obtain a crisp decision from the fuzzy output, the fuzzy set, or the set of singletons, has to be defuzzified. Several heuristic methods exist; one of them is to take the centre of gravity. For the discrete case with singletons, the mean of maximum method is usually used, in which the point with the maximum singleton is chosen. The centre of gravity method computes the sum of the outputs weighted by the corresponding membership values of the output fuzzy set, divided by the sum of those membership values; formally, the crisp value is the value located under the centre of gravity of the area.

REVIEW OF LITERATURE OF SOFTWARE AND SYSTEM MAINTENANCE

Hall et al. [1] explored some connections between several metrics and various demands in different maintenance fields. They noted that software maintenance activities carry a substantial cost and can occupy a major share of the overall budget, and that the quality of software plays a major role in its integration and long-term functioning. They worked towards improving code quality and kept a close eye on ongoing research in software maintenance. They explored the possibility of assessing code quality, measuring the amount of change a software system has undergone after its initial releases, monitoring the quality of successive releases, assessing the overall risk involved in changing the source code, identifying the portions of code that can be further improved or reused later, and providing a basis to review the changes made to the code. Basili et al. [2] put forward a theory that addresses the uncertainty in the maintenance process. Their main concern was to understand and predict the expenses of maintenance releases of software systems, with the goal of drawing maximum efficiency from the next maintenance release in terms of quality and functionality. They documented the conclusions of a case study which established that an incremental approach is better for understanding the distribution of effort in a software maintenance release, and they succeeded in building a predictive effort model for it. The main difficulty they dealt with was maximising the functionality and efficiency of a release bounded by resource constraints. Their paper discusses descriptive models of maintenance which help explain how effort is distributed efficiently across various maintenance releases. Hayes et al. [3] proposed a model to roughly estimate the human effort involved across various maintenance activities.
They succeeded in forming a model that estimates software maintenance effort in person-hours, named the Adaptive Maintenance Effort Model (AMEffMo). Metrics such as the number of lines of code changed and the number of operators changed were found to have a strong impact on the calculation of maintenance effort. Their work aimed to address the estimation and cost of projects where changes were made frequently, and they developed a method for predicting maintenance effort as it varies. They also performed a study to determine which metrics were most closely related to maintenance effort measured in hours, used their past research experience to form a candidate list of such metrics, ranked those metrics by priority, and then used them collectively to build the required model. They finally produced two models, which showed that an industry environment is more demanding than a university environment in terms of maintenance effort applied. Effort was also predicted correctly to a good extent by regression models, which also served as a good source of important information for managers and maintenance experts. However, this work too did not address software erosion metrics and corresponding maintenance actions. Zadeh [4] discussed in detail fuzzy sets, i.e. sets with un-sharp boundaries, which work better in some cases than traditional sets. He took a mathematical approach to their study, applying union, complement, intersection, convexity, etc., and even proved a separation theorem for convex fuzzy sets without requiring them to be disjoint.
In contrast, the work of Vissagio [5] focused particularly on erosion metrics and maintenance actions. They put forward some erosion symptoms common to legacy systems. Each symptom was measured by a set of metrics, and the outcome of the measurements suggested which maintenance actions should follow. The metrics provide a good way to keep a check on software systems and ensure that quality does not degrade to the point where the most costly renewal actions have to be performed to improve it. However, this work did not focus on the uncertainty involved in the maintenance environment. S. Kumar Dubey [6] proposed a method to compare the maintainability of object-oriented software systems. The inputs to the method were size, complexity, coupling and inheritance, and he showed how these metrics affect the maintainability of different software. He proposed a fuzzy-based model that takes object-oriented projects and evaluates their maintainability on the basis of these metrics; the value obtained by the fuzzy model was validated using the analytical hierarchy processing technique. Ning et al. [7] made an effort to eradicate the problem of uncertainty using a fuzzy logic approach, but this work ignored the recommended maintenance actions. Amrendra et al. [8] presented the idea of software maintainability and proposed a fuzzy model for estimating it. They used object-oriented metrics such as Adaptability, Complexity, Understandability, Document Quality and Readability, together with triangular, trapezoidal and Gaussian membership functions. According to them, a fuzzy logic based approach has several advantages over other techniques such as neural networks.
They considered factors like Adaptability, Complexity, Understandability, Documentation Quality and Readability (RD) as inputs, while maintainability was the output. Ghosheh et al. [9] provided a study of new web application metrics used for estimating size, complexity, coupling and reusability. The metrics were applied to two different web applications in the telecommunication domain. The study is an exploratory one for metrics that aim to handle the maintainability of web applications. These metrics are based on the Web Application Extension (WAE) for UML and measure the following design attributes: size, complexity, coupling and reusability. The work provides a subjective measurement of the metrics but not an objective one. Hamdi et al. [10] proposed a fuzzy model for the maintainability of object-oriented software systems. The inputs to the model were metrics such as complexity, class, coupling, inheritance and number of children, on which maintainability depends. They found the relationship between object-oriented metrics and software maintainability to be very complex; there is therefore considerable scope and interest for research into advanced techniques that predict software maintainability by building suitable models. During maintainability prediction, however, uncertainty and inaccuracy surround not only the product quality attributes but also the association between external and internal quality attributes. A suggested reason is the presence of two sources of information contributing to the model: historical data and human experts. They therefore attempted to address this by using fuzzy logic, which handles inaccuracy and uncertainty, to build an efficient maintainability prediction model. Castillo et al.
[11] took forward the work of Vissagio by handling the uncertainty in maintenance through a fuzzy rule based system that gives the best set of maintenance actions for the erosion symptoms detected. They proposed a set of erosion symptoms which help determine the cause of software degradation, together with a set of erosion metrics to measure or quantify those symptoms. The use of a Fuzzy Inference System (FIS) eradicated the problem of uncertainty in decision making. With the help of maintenance experts they established fuzzy rules that determine the maintenance action to be followed for each erosion symptom.

Parrytomar (talk)07:56, 14 May 2017

Sentiments Analysis through computer and electronics science

Pradeep Tomar and Gurjit Kaur

Sentiment analysis is a sub-category of text mining, meaning the analysis is done to find out what a given text conveys; finding the meaning of the text is the sole purpose of mining it. Mining a text for its sentiment orientation requires an approach for reaching a conclusion, and sentiment analysis can mainly be done using two approaches: supervised techniques and unsupervised techniques. A third, semi-supervised approach is used by Sindhwani et al. [10] and many others in the review below.

Work Related to Lexicon Based Approaches

The supervised approach, as the name says, works under guidelines and rules described for the text. The approach here is to use labeled data for analyzing the data and generating a result. Labeled data can be a list of sentiment words separated according to their sentiment orientation; in simple words, they are labeled according to their relevance towards sentiment. A word can possess multiple degrees of relevance: it could be negative, positive, weak negative, weak positive, strong negative, strong positive, etc., and a single word can be any of these; if none is relevant, the word is termed objective.

According to Godbole et al. [11], a supervised approach can be used to find sentiment in news, blogs, articles, etc. Their work describes a large sentiment analyzer supported by a large lexicon; the lexicon acts as the labeled data, each word is compared against it, and a result is generated accordingly.

Taboada, Maite, et al. [12] use a dictionary of words with corresponding negative and positive scores; each word is a lexicon entry, making it a lexicon-based approach. A notable technique deployed here is POS tagging in the analysis process, which helps single out the words of greatest importance. Adjectives are the sentiment words in a text, so the sentiment orientation of the text can be determined from the adjectives present in it. As in all lexicon approaches, the words have negative and positive scores; by extracting the scores of the adjectives found in the text, the sentiment tilt can be determined. Adjectives carry the sentiment, so the presence of more negative adjectives makes the text negative, and conversely the presence of more positive adjectives makes the text positive in terms of sentiment orientation.

When a collection of negative and positive adjectives is encountered, the sentiment orientation is determined by the strong adjectives: a strong positive adjective score will shadow several weak negative adjective scores, since the resultant score will be positive. Taking adjectives alone into consideration can lead to incomplete sentiment scores; below are reviews of work where more than one part of speech is taken into scrutiny.

Benamara, Farah, et al. [13] state that adverbs and adjectives together are crucial, rather than adjectives alone. Their work shows that adverbs are an important part of finding the sentiment direction, as adjectives alone can be incomplete. According to [14], adverbs in the text mainly support the adjective sentiment: the degree of an adjective's effect on sentiment is determined by the adverbs applied to it. Adverbs, in other words, intensify the adjectives and the sentiment obtained from them. The following cases make the idea of taking adverbs and adjectives together clearer.

• Case 1: String: "The taste of the bread is very bad." Here "bad" is the adjective, and its score in the lexicon will be negative; but looking at the string carefully, the adjective alone is not sufficient. The string has an adverb, "very", which acts as an intensifier for the adjective; the adverb tells the degree to which the string is negative.

• Case 2: String: "The taste of the bread is not good." Here "good" is an adjective showing positive sentiment, but the actual sense of the string is negative, which can only be captured by taking the adverb and adjective together. The adverb here is "not", with a negative score, which ultimately shifts the text in the negative direction. If adjectives alone were considered, the resultant sentiment score would be incorrect.

These cases explain clearly why using adverbs and adjectives in combination is of great importance.
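The two cases above can be sketched as a toy lexicon scorer in which adverbs either intensify or negate the adjective that follows them. The lexicon entries and weights are invented for illustration and are not taken from any of the cited works.

```python
# Toy lexicon scorer with adverb intensification and negation.
# Lexicon entries and weights are invented for illustration.

lexicon = {"good": 1.0, "bad": -1.0}
intensifiers = {"very": 1.5}   # scales the next adjective's score
negators = {"not"}             # flips the next adjective's sign

def score(tokens):
    total, scale, flip = 0.0, 1.0, 1
    for tok in tokens:
        if tok in intensifiers:
            scale = intensifiers[tok]
        elif tok in negators:
            flip = -1
        elif tok in lexicon:
            total += flip * scale * lexicon[tok]
            scale, flip = 1.0, 1   # reset after consuming an adjective
    return total

print(score("the taste of the bread is very bad".split()))  # → -1.5
print(score("the taste of the bread is not good".split()))  # → -1.0
```

With adjectives alone, the second string would score +1.0; handling the adverb flips it negative, matching the intended sense.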

Moving on to other frameworks with new directions for finding accurate sentiment: V. S. Subrahmanian and D. Reforgiato [14] and Nasukawa, Tetsuya, and Jeonghee Yi [15] discuss a new factor for sentiment analysis, namely deploying a feature-extraction unit to pull features out of the text, where a feature is the action or characteristic about which the text holds some sentiment. For Abbasi et al. [16], the orientation of a particular feature in terms of polarity is crucial; after all, the overall polarity of a text depends on what has been said, and about which feature. The same approach is tried in Tan, Songbo, and Jin Zhang [17] and Feldman, Ronen [18]. Feature extraction is important because the orientation of a feature determines the final polarity generated after attaching the feature to its corresponding adjective. When performing analysis in a domain-specific scenario, every feature has a different impact on the overall polarity of the subject under review; every feature acts differently for every object. For example, take an object such as a mobile phone with the features "battery life" and "computation time": if both features are assigned the adjective "long", which is positive in the lexicon dictionary, the computed sentiment is not what the adjective alone suggests. For "battery life" the adjective behaves positively, because it is a positive feature and the greater its value the more the final sentiment moves in the positive direction; but for "computation time" the same adjective has a negative impact on the final sentiment, because it degrades the quality of the object's feature. This is hidden semantics in the text which could not be identified without feature extraction. Features for any object will be the verbs associated with it.
In Theresa Wilson [22], features are extracted from Twitter tweets via hashtags; hashtags act as the features of the text being tweeted. Having discussed the importance of features in sentiment analysis, extracting them remains a challenge, one addressed by Yi et al. [19] and Arab Salem et al. [20].
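A hashtag-based feature extractor of the kind described for [22] can be sketched as follows; the tweet text and the regular expression are illustrative assumptions, not taken from the cited work.

```python
# Hashtag-based feature extraction from a tweet; the tweet text and
# the regular expression are illustrative assumptions.
import re

def hashtag_features(tweet):
    """Return hashtags (lower-cased, '#' stripped) as candidate features."""
    return [tag.lower() for tag in re.findall(r"#(\w+)", tweet)]

tweet = "Loving the camera on this phone! #BatteryLife could be better #camera"
print(hashtag_features(tweet))  # → ['batterylife', 'camera']
```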

Feature extraction can also be performed using a natural language processing approach. A natural language processor requires tree-structured training data to identify words and their corresponding POS. In Santorini, Beatrice [21], an application of POS tagging is shown; the authors used the Penn Treebank as training data for the NLP processor.

The review above lays out the work done in the desirable direction; extensions of techniques and approaches add accuracy to the analyzer. Patrick Paroubek et al. [23] present a framework for analyzing microblogging data. Today the percentage of active internet users is much higher than before, and this mass of active users frequently posts reviews of products. This review data has to be analyzed for opinion mining; the reason for taking text as input is that sentiment analysis falls under text mining, and the approaches defined are suitable for text only.

Work Related to Different Levels

According to Liu, Bing [24], the sentiment analysis of a corpus can be done with multiple approaches; the approach here divides the process into different levels, where the classification is determined by the size of the test text. The different levels are:

• Document Level: As the name says, document-level analysis is used when the text under analysis spans multiple lines. It determines the overall sentiment of the whole document in a shared format: the result measures both positive and negative polarity, reporting that the document is x percent positive and y percent negative. The constraint of document-level sentiment analysis is that the document should keep the same context and target throughout. Zhang et al. [25] propose a framework for analyzing sentiment in Chinese text; at the time there was no other framework for analyzing text in a language other than English. The text is split into sentences, every sentence is translated to English, and the document is reassembled for sentiment analysis. This approach is reviewed by Feldman et al. [26], which surveys various sentiment analysis techniques.

The framework described in Yessenalina et al. [27] analyzes movie reviews at document level. It analyzes debates as well, showing quite desirable accuracy. For document-level analysis this work deploys a module for extracting the hidden meaning of sentences, which makes the analyzer's work less hectic since the whole document need not be parsed; the extraction takes the sentences from the document with the most relevance.

• Sentence or phrase level sentiment analysis: Phrase-level sentiment analysis is done on sentences or phrases extracted from the whole corpus or document. A phrase-level analyzer computes and returns the sentiment of a particular sentence. The advantage of sentence-level analysis is that documents need not be arranged by context: a sentence can individually carry sentiment about a context, and since every sentence is taken separately, the context can be taken into consideration without the mixture of contexts that occurs in whole-document analysis.

In Wilson et al. [28] an exact framework for performing sentiment analysis at the sentence or phrase level is presented; the framework has two phases, applied in order. The first phase scrutinizes every sentence or phrase for polarity detection: if the sentence possesses any polarity other than neutral, it is sent to the next phase. The second phase then calculates the intensity of the polarity, i.e. knowing that the sentence has some sentiment, how strong or weak that sentiment is. This is determined using a sentiment lexicon with negative and positive words labelled accordingly, making it a supervised analysis.

In Arun et al. [29] sentence-level analysis is observed from a different perspective. According to the authors, every sentence can be modularized further: it can have many parts which may depend on each other and which, taken together, may not express the sentiment correctly. They deploy the concept of finding conjunctions in the sentence, breaking the sentence into parts at those conjunctions, and analyzing every part individually. Breaking sentences up further risks losing their semantics, so to preserve them the sentence is converted into a tree-structure representation: in a tree, the rules of the sentence's semantics remain unchanged, and the analysis proceeds hierarchically, making the splitting and sentiment computation easy. They achieved 80 percent accuracy with this framework, which is linguistically sound, having been tested against the WordNet bag-of-words representation.
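The conjunction-splitting idea of [29] can be sketched as follows, without the tree representation the authors use; the conjunction list and the lexicon are invented for illustration.

```python
# Splitting a sentence at conjunctions and scoring each part separately.
# The conjunction list and lexicon are invented for illustration; no
# tree representation is built here, unlike the cited work.

CONJUNCTIONS = {"but", "and", "although"}
lexicon = {"great": 1.0, "slow": -1.0}

def split_on_conjunctions(tokens):
    """Break a token list into parts wherever a conjunction occurs."""
    parts, current = [], []
    for tok in tokens:
        if tok in CONJUNCTIONS:
            if current:
                parts.append(current)
            current = []
        else:
            current.append(tok)
    if current:
        parts.append(current)
    return parts

def part_scores(sentence):
    """Score every conjunction-delimited part with the toy lexicon."""
    tokens = sentence.lower().split()
    return [sum(lexicon.get(t, 0.0) for t in part)
            for part in split_on_conjunctions(tokens)]

print(part_scores("The screen is great but the software is slow"))  # → [1.0, -1.0]
```

Scoring the whole sentence at once would yield 0.0 and hide the contrast; the split keeps both clauses visible.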

Wilson et al. [30] have worked out a framework for analyzing phrase-level text for sentiment orientation. As the authors note, a sentence can have multiple contexts pointing in many directions, some of which are of no use to the analysis. Some words which are positive may imply a negative meaning for a feature, and a feature is associated with a context. The contexts to look for in sentence-level sentiment analysis are relevant and irrelevant ones; relevant context is further divided into two types, prior polarity and contextual polarity. Prior polarity must be determined first, since the actual contextual polarity depends on it. The study shows that neutral sentiment present in the text impairs the quality of the features, making this another step towards more accurate lexicon-based sentence-level sentiment analysis.

Kaji et al. [31] build the lexicon themselves rather than using pre-built lexicons and dictionaries. Making one's own lexicon can be a time- and effort-draining process, and the resulting lexicon must be able to cover all the work; in a way, it should envelop all the words in the text. To build a lexicon of such quality, the authors developed a framework for extracting sentiment semantics and rules; arranging these rules and clues again yields a complete document with sentiment, so the approach is helpful. Like the others, this work also suggests using sentence-level sentiment analysis. Another unique feature of this work is that it analyzes text in the Japanese language. To prepare a corpus for extracting clues and sentiment features, the framework uses a large repository of web pages containing all relevant information about the topic. Such a technique saves time and effort when building one's own lexicon, and the extracted clues make it easier for the recognizer to recall data. This again helps achieve very high accuracy, which is the sole purpose of extending work in this field. The work also helps determine the objectivity of the text; objectivity can be tricky to determine, since a word can act objective and also subjective under the influence of the verb performed by the subject.

• Aspect or feature level analysis: Aspect-level analysis is when one does not care about the size of the text or where it comes from, but about a summary of the text in terms of entities, objects, aspects, etc., the components targeted when writing a review of the subject. Imagine a collection of vectors forming a document: vectors are made up of direction and value, the direction being the polarity of the vector and the value being the effectiveness of the sentiment.
A collection of vectors can be seen as a collection of words, and a collection of words as a document or sentence. As explained above, the size of the collection does not matter much; what matters is the vectors which actually mean something, and the effect those vectors have on the final sentiment quantity when the process completes. An aspect or entity is the target about which the sentiment was expressed; an aspect can have a negative or a positive effect, and anything with no relevance to the aspect should be ignored in aspect-level sentiment analysis. Below is some of the work done under this category of analysis.

Jochen et al. [32] show what feature extraction actually looks like; their work demonstrates the use of feature extraction, its applications, and its desirable effects. The authors exhibit the advantages of extracting features or entities from text; since sentiment analysis is a field under text mining, the aspects are parsed and extracted using text-mining approaches. This work was done by researchers at IBM; the main aim of the miner was to extract the important aspects of a document. The vision is to make people see what they usually miss in a document while going through it. The framework succeeds in distilling small aspects from a mountain of text data. The data here can be emails, insurance/policy claims, documents/contracts, complaint forms from customers, competitors' content, etc. The work is acknowledged in treating all the above cases; the vision was to shift the work into a document-less environment where reading every document is not required: instead one can mine the document for feature extraction and see the important matters of interest discussed, reading the information trapped in forms, email, etc.

The work done by Feldman et al. [33] is somewhat similar; they just took a fully loaded database instead of emails and forms. They call this "Knowledge Discovery in Databases", and it is a similar approach in that it takes a large amount of data for analysis: the dataset used to test their framework was 52,000 documents. The terms are also arranged in a hierarchical manner, with the most important terms and entities at the top and the less important ones below.

Cohen et al. [34] identified a pressing application in bioinformatics: a framework for analyzing textual reports, auxiliary lab reports, records, etc. for information summarization, generating output as interpreted from reading the text. The communication gap between the disciplines of bioinformatics can thus easily be reduced. Reports of specimens and experiments normally have to be read completely to understand their summary; the framework here uses aspect-level analysis, which again avoids traversing the whole document and gives the output in the form of features and aspects, which are comparably much easier to handle. They used the MEDLINE database, which contains 24 million records, quite a large database to handle, and the framework analyses it for feature estimation.

Michael et al. [35] performed aspect-level analysis, converting all the text into features and entities. To deal with vectors, an algorithm is required that can evaluate vector values and separate unsupported vectors from supported ones; for this the authors used a Support Vector Machine. One only needs to train the SVM classifier, and the classified data is its output. The analysis done here can yield high quality: the classifier can classify text even under hidden language or linguistic constraints usually not handled by an analyzer. To test the framework, they took the feedback received on companies' online forums, where the text can contain many twists and turns; using SVM for classification unburdens the work to a large extent. This work also falls under the dimension-reduction domain, because SVM in general performs dimension reduction.

Brendan et al. [36] performed sentiment analysis on Twitter data; the data used comprised election comments from the 2008-09 presidential election in the United States, and they tried to predict the tilt of the public polls for any leader using Twitter tweets. The approach depended entirely on correlating tweets with leaders; according to the abstract of this work, the authors tried to substitute the existing poll predictor with this automated approach. The system was built using text-mining principles and methods. Feature extraction was used to determine the demands people were expressing in the tweets; tags such as "jobs", "economy", etc. stood for consumer concerns. Handling political debate or election data can be challenging, as its complexity differs from that of product feedback: target features can be predicted in feedback, but predicting them here is challenging and time-consuming. Opinion mining in this kind of domain can return features which cannot be predicted beforehand, and that poses a great challenge.

Parrytomar (talk)07:48, 14 May 2017

Recommendation Systems

Pradeep Tomar and Gurjit Kaur

In Wang et al. [11], an analysis of Quora's features is carried out, studying the relations between various entities in the Question & Answer system. Three different types of analyses have been carried out through graphs: between topics and users, among various users, and between related questions. The user and social graphs capture activity and the relatedness of questions. Their study tries to answer several questions, such as the role of traditional topics, the role of super users in driving users' attention towards non-related questions, and how better questions are filtered. The study finds that the views and answers of a question play a role in its relevance. A comparison is made between Quora and Stack Overflow: Quora, being an integrated social-networking site, has richer involvement graphs and patterns. A BFS-based crawler was used to extract more than four lakh questions, 56,000 unique topics and more than two lakh users. In the user-topic graph, 95% of users were found to have at least one topic in common, as they have to select a topic during the sign-up process. Around 27% of the users follow around 1,000 topics, and these users happen to be intellectual. The user-topic graph shows that the topics of questions and the views associated with them attract more users to a question. The second graph studied is the social graph between users; the follower distribution fit for Quora was found to be better than Facebook's. The study finds that people attract followers by contributing high-quality answers and by clustering the followees. There may be a bias in the up-votes, as a super user with more followers gets more up-votes even when the quality of the answer may not be the best among others. The third kind of graph used for the analysis is the related-questions graph, with questions denoted as nodes and edges as a similarity measure.
Their study finds that question relations are stable over time, based on two snapshots taken two months apart; questions with more related questions attract more users. Maity et al. [12] analyze and attempt to predict the popularity of the topics under which questions are categorized. A framework has been designed to predict topic popularity, and the Latent Dirichlet Allocation topic-modeling approach has been used to categorize question text into topics. The dataset collected spans four years and consists of 822,040 questions across 80,253 topics and 1,833,125 answers. It has been found that stable topics do not have much variation in their questions over time. K-shell analysis has been performed to determine inter-topic dynamics. Context features, content features and user features have been considered for learning about topic popularity, and N-gram modeling has been used for analyzing the corpus. Salton et al. [13] introduced TF-IDF for the very first time. They analyze a text collection as vectors, with each term carrying a frequency value. Term weighting in Boolean query systems is also discussed, as is common class assignment for keyword-based clustering of documents for better representation of the knowledge. The observations in their study remain useful today, as TF-IDF can effectively describe the relevance of a term in a document. Thada et al. [14] compare the Jaccard, Dice and Cosine similarity coefficients to find the best fitness value using a genetic algorithm. Documents retrieved from Google using search queries are matched for similarity; the Jaccard, Dice and Cosine similarity coefficients are used, and selection, mutation and crossover operators are applied, with a roulette function used after every generation.
The ten queries used are “Anna hazare anti-corruption”, “Osama bin laden killed”, “Mouse Disney movie”, “Stock market mutual fund”, “Fiber optic technology information”, “Britney spear music mp3”, “Health medicine medical disease”, “Artificial intelligence neural network”, “Sql server music database” and “Khap panchayat honour killing”. The spelling and syntax of these queries are not always grammatically correct, but they are reproduced here as given in the study. The crossover probability is 0.7, the mutation probability 0.01, and the number of iterations 150. The results indicate that cosine similarity performs better than the Dice and Jaccard coefficients. Das et al. [2] discuss the methods used in Google News recommendation: PLSI, co-visitation counts and the MinHash algorithm. MinHash using the MapReduce model is discussed. With PLSI, user and item modeling is described via a joint distribution between users and items, and their study uses MapReduce expectation-maximization for fitting. To understand co-visitation, a graph is studied in which the nearest items are those co-visited by a user. The recommendation system is evaluated on live traffic. Abel et al. [15] analyse content-based cross-domain recommendation systems built on a collective recommendation model. The characteristics of tag-based profiles for social-networking systems are analyzed; their evaluation shows that tag-based user modeling outperforms other methods and helps address the cold-start and sparsity problems. A tag-based personomy is defined for each user, comprising tags, resources and tag assignments. Kaminkas et al. [16] propose a location-based, tag-driven music recommendation method. The context considered is a place of interest; this is a collective-model, content-based recommendation system.
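The three coefficients compared by Thada et al. can be sketched directly on token sets. The following is a minimal illustration only, not the genetic-algorithm pipeline of the study (which adds selection, crossover and mutation on top); the example documents are made up:

```python
# The three set-similarity coefficients compared by Thada et al. [14],
# applied to bag-of-words token sets (binary term vectors).

def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|"""
    return len(a & b) / len(a | b) if a | b else 0.0

def dice(a: set, b: set) -> float:
    """2·|A ∩ B| / (|A| + |B|)"""
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

def cosine(a: set, b: set) -> float:
    """|A ∩ B| / sqrt(|A|·|B|) for binary term vectors."""
    return len(a & b) / ((len(a) * len(b)) ** 0.5) if a and b else 0.0

doc1 = set("fiber optic technology information".split())
doc2 = set("fiber optic communication technology".split())
print(jaccard(doc1, doc2))  # 3 shared / 5 total terms = 0.6
print(dice(doc1, doc2))     # 2*3 / (4+4) = 0.75
print(cosine(doc1, doc2))   # 3 / sqrt(16) = 0.75
```

Note that Dice and cosine coincide here only because both documents have the same number of terms; in general the three coefficients rank document pairs differently, which is what the genetic-algorithm comparison in the study exploits.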
Their study states that an emotion is attached to a user listening to a particular kind of music at a particular location; the location is taken as the cross-domain source, and each tag has a usage probability for the category it falls in. Li et al. [17] propose a collaborative filtering model for sparsity reduction. This is an adaptive-method-based model in which user-item rating patterns are transferred into a sparse rating matrix in the target domain. Cluster-level rating patterns are used, and the users and items may be non-identical or non-overlapping. In their study the transferred knowledge is termed a ‘codebook’: “Codebook is a (k x l) matrix which compresses the cluster-level user-item rating patterns of k user clusters and l item clusters in the original rating matrix”. Li et al. [18] propose a transfer rating-matrix generative model for cross-domain recommendation. The model is for collaborative-filtering-based recommendation and is collective-model based; a cluster-level rating matrix is used, and a user-item joint mixture model is the focus. Their study states that “the advantage of rating matrix generative model is that it can share the useful knowledge by pooling the data from multiple tasks”. Pal et al. [19] carry out a temporal study of experts in question-and-answer communities. Unsupervised machine learning methods are used to study the behavioral patterns that distinguish one expert from another. The Stack Overflow dataset is used, and the point-wise ratio of the best-answer time series to the answer time series serves as the temporal signal. The results show that an expert’s probability of answering questions increases with time, in contrast to ordinary users; the temporal analysis also shows that as an expert gains reputation, ordinary users acknowledge her, which leads to less participation by those users. Abe et
al. [20] study reinforcement learning under two conditions. A linear function relating a feature vector to every action is considered. Two cases are studied: one in which the unknown linear function is applied and a continuous-valued reward is given, and another in which the probability of obtaining the larger binary-valued reward is estimated. Auer et al. [21] show that optimal logarithmic regret is achievable over time. The UCB1 policy is discussed, and it is shown through a series of theorems that UCB1 achieves logarithmic regret uniformly. The MinHash technique was first introduced by Broder et al. [22] in 1997. In their study, a MinHash function is described as a function drawn from a family of random permutations, and shingling is discussed as a way of representing documents. Two document sets resemble each other if the size of their intersection divided by the size of their union is close to 1. Three important theorems are given regarding min-wise hash functions. Theorem 1 states that “for a min-wise independent family F, |F| is at least as large as the LCM of 1, 2, .., n and hence |F| = e^(n−o(n))”. Theorem 2 states that “there exists a min-wise independent family F of size less than 4^n”. Theorem 3 states that “there is a family F of size at most n·2^(n−1), such that F with an associated μ is min-wise independent”. These three theorems establish remarkably useful properties of MinHash functions. Andrei Z. Broder [23] observes that it is not necessary to compare whole documents for resemblance: random sampling can be used to compare the relative size of the intersection of documents. Documents can be broken into “shingles”, whose “sketches” represent the document. His study states that clustering m documents into closely resembling groups can be done in time proportional to m log m. The Jaccard similarity can be estimated from the agreement of MinHash values under random permutations of the sets being compared. Andrei Z.
Broder in [24] discusses how two documents can be compared for resemblance and containment, meaning “roughly the same” and “roughly contained” respectively. The resemblance function returns a value in [0, 1], where 1 means total similarity and 0 means none; the containment function, also valued in [0, 1], describes whether one document is contained in another. Fingerprint comparison can likewise be performed by breaking the documents into shingles. Li et al. [25] discuss the offline evaluation of news-article recommendation using contextual bandit algorithms, which take the context of a news item into account. The approach is data-driven rather than simulator-driven, and a replay methodology is used. The exploration-exploitation framework helps in identifying good-quality news: a feature vector, known as a context vector, is associated with every arm, and exploration is used to choose sub-optimal arms so as to gather more information about them. Son et al. [1] discuss news-article recommendation using location. Explicit localized semantic analysis is used for the topic representation of documents; geographic topic modeling associates topics with geographic locations, with geographic topics and regions as the two latent variables. A score function measures the appropriateness of a news article to a particular location, and the topic space is defined using Wikipedia concepts. Chou et al. [26] propose an approach in which the news article with the highest score at each user visit is recommended during the exploitation phase, while during the exploration phase articles with high reward possibility are explored using uncertainty measures. This work was done for the ICML 2012 Exploration and Exploitation Challenge; National Taiwan University won the first phase of the challenge, and the paper summarizes their solution.
In their study, a scoring model estimates the click-through rate of an article gathered over time and also provides the system’s uncertainty estimation.
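The MinHash idea running through [22]-[24] can be sketched in a few lines: hash each set with many random permutations, keep only the minimum per permutation, and the fraction of matching minima estimates the Jaccard similarity. A minimal sketch, using simple linear hash functions as stand-ins for true random permutations (an assumption; real implementations use stronger hash families):

```python
import random

def minhash_signature(s: set, hashes: list) -> list:
    """One minimum per hash function: the 'min-wise' sample of the set."""
    return [min(h(x) for x in s) for h in hashes]

def estimate_jaccard(sig_a: list, sig_b: list) -> float:
    """Fraction of matching minima ≈ |A ∩ B| / |A ∪ B|."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

random.seed(42)
PRIME = 2**61 - 1
# Linear hashes h(x) = (a*x + b) mod p approximate random permutations.
hashes = []
for _ in range(256):
    a, b = random.randrange(1, PRIME), random.randrange(PRIME)
    hashes.append(lambda x, a=a, b=b: (a * hash(x) + b) % PRIME)

A = set(range(0, 80))
B = set(range(40, 120))   # true Jaccard = 40/120 ≈ 0.333
est = estimate_jaccard(minhash_signature(A, hashes),
                       minhash_signature(B, hashes))
print(round(est, 2))      # close to 1/3 with 256 hash functions
```

The estimator's standard error shrinks as 1/sqrt(k) in the number k of hash functions, which is why signatures of a few hundred values suffice for near-duplicate detection at web scale.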

Parrytomar (talk)07:43, 14 May 2017

Optical Switches and Communication

Gurjit Kaur and Pradeep Tomar

INTRODUCTION

Optical networks based on Wavelength Division Multiplexing (WDM), also referred to as wavelength cross-connects, can combine different optical signals onto one optical fiber link at various wavelengths [1]. In recent years it has been found that using materials with a strong electro-optic effect in optical devices is appropriate for WDM networks. These materials have special characteristics that can optimize the switching voltage and other important parameters of optical devices, hence reducing the price of the network and providing a reliable system. This kind of optical switching network can be used for many commercial purposes [2]. Many optical cross-connect (OXC) switching devices have been proposed using 2x2 switches organized in various multi-stage networks, which are good candidates for higher-order switches [3]. These switching devices work as splitters and couplers and can transfer a signal from one input port to any other output port [4]. Most switching operations require high switching ability and speed to make the system more reliable [5,6].

It is known that, on the basis of the Mach-Zehnder interferometer, different 2x2 switches have been proposed [8], and in past years many designers have presented studies based on different switch designs. One major benefit of an optical switching network is that it does not require any conversion of electrical data signals to optical ones, so routing of the optical signal becomes easier because it does not depend on data rates or protocols. Performing the switching function optically rather than electrically also reduces the number of system components and hence increases the speed of system operations; throughput, another parameter that makes the network more reliable, increases, while the operating power falls accordingly [5]. A further consideration is the effective cost of the overall system, which is reduced by moving to optical networks. Nowadays the backbone of almost any network is optical, so it is necessary to improve and extend optical networks. For this, new hardware is being designed and various protocols have been studied to make transmission more reliable and quicker. Most optical networks still use electronic components that cannot keep up with the speed of the optical network, so bandwidth is wasted: only a small fraction of it can be used through these components. Optical amplifiers are devices that amplify different wavelengths simultaneously, which reduces the need for OEO regenerators. The main advantage of an optical cross-connect is that it switches the signal optically and so does not require conversion of the electrical data signal to an optical one. Another device useful in large networks is the Optical Add-Drop Multiplexer (OADM).
OADMs are used simply for adding or dropping a particular data signal to or from an optical fiber channel [5,24]. Many solutions have been proposed for establishing optical light paths. One of them is Optical Burst Switching (OBS) [24], which establishes a light path for a small interval of time, say milliseconds; the sender does not need to establish the complete data path before beginning transmission of the optical signal, and nodes in the network can buffer data during transmission if the required path is not yet switched. Another proposed solution is Optical Packet Switching (OPS), which switches the user data optically. Optical packet switching is faster and cheaper than traditional switching, since it requires less space to deploy, consumes less power and dissipates less heat during transmission; hence OPS makes transmission cheaper to maintain and much more reliable. Fig 1.1 shows the basic layout of an OPS design [24]. The third proposed solution, Generalized Multiprotocol Label Switching (GMPLS), covers switching operations such as network and space switching for time and wavelength as well as packet switching.

1.2 APPLICATIONS OF SWITCHES

1) Optical Cross-Connects: Optical switches are mainly used in optical cross-connects to set up light paths, which may or may not be static depending on the network. OXCs play an important role in reconfiguring an optical network: the optical switches inside them are used to reconfigure and hence establish new signal paths. Cross-connects usually handle a large number of wavelengths at a particular location such as a hub or server. OXCs are the main routing element in the network topology, since they can optimize the transmission path of a signal or carrier [17]. Optical cross-connects can be very effective in large networks, as they provide certain key features:

a) Service Provisioning: In a large network it is important to automate provisioning, so that handling a large number of wavelengths at a particular location is error-free and easy to implement without much expense. In this scenario, OXCs can set up data paths in an automated way, avoiding further failures in the network.

b) Protection: To cope with component failures and cuts in the fiber link, cross-connects are used to detect the failure and to reroute the data path quickly around it. This is a major requirement in any network, since it avoids further loss of the data transmitted along a path affected by a failure in the fiber plant; protection is thus one of the key features cross-connects bring to a network.

c) Monitoring and loopback: An optical cross-connect also provides monitoring of the optical data signal during transmission over the data path, and allows a loopback function in which an optical signal can be returned at an intermediate node to the node from which it started.

d) Scalability: Cross-connects provide the large degree of scalability required in optical networks.

e) Wavelength conversion: Along with features such as transferring a signal from any input port to any other output port, cross-connects can also perform wavelength conversion.

The major benefit of using optical cross-connects in an optical network is their ability to switch the optical data signal with high reliability, low output-power loss and great signal uniformity, independent of path length. Another important feature is that they do not interrupt other optical signal paths, since they switch the data signal only onto the desired optical path. In many current switching networks, cross-connects employ an electrical core for the switching operation: the optical data signal is first converted into an electrical signal, switched using electrical devices, and then converted back into an optical signal. This approach has several drawbacks. First, it requires conversion at different stages, which increases the cost of the overall system and requires extra conversion equipment, making the system more complex and demanding more maintenance. Second, the speed of electronic switching devices cannot keep up with optical switching devices, which becomes a major bottleneck in the transmission of the data signal [18].

2) Network Provisioning: Establishing a new optical data route, or modifying an existing route, in a network requires network provisioning. Switch-based provisioning can reconfigure a path for a given request within a few seconds, whereas the manual process is very sluggish, taking weeks or more. To increase network flexibility, highly capable reconfigurable switches are required, so that they can respond promptly, and in an automated manner, to the services requested of the network [18].

3) Protection Switching: The main function of protection switching is to determine the nature and origin of a failure in the network, so that the issue can be rectified and the nearby nodes informed, which in turn propagate news of the failure to the other nodes. Protection switching is effective, but it is slower than ordinary optical switching; it typically requires small-port-count optical switches of size 1x2 or 2x2 [19].

4) Optical Add/Drop Multiplexing: An OADM lets us easily add or drop a wavelength (optical signal) to or from the optical transmission path in a given wavelength channel, without any need for electronic processing. This makes OADMs a cost-effective way of handling traffic in large optical networks [19].

1.3 OPTICAL SWITCH FABRICS

All-optical switching fabrics are very beneficial, as they extend the reach of switching operations in optical networks. One major advantage of all-optical switching devices is that they switch directly in the optical domain, and hence require no O/E/O conversion for the switching operation, which makes the network more reliable and increases the switching speed. Many all-optical switching fabrics have been studied and some are still under research; for each technology, one or more trade-offs must be considered before an application is feasible. Before going through the main optical fabric technologies available today, we first review the parameters by which these optical switches are analyzed. Switching time is a very significant parameter, and its requirements differ from application to application. Other essential parameters of a switch are as follows [8].

1) Insertion loss: the fraction of signal power lost because of the switch, generally measured in decibels; it should be as small as possible. In addition, the insertion loss of a switch should be the same for all input-output port connections (loss uniformity).

2) Crosstalk: the ratio of the power at a particular output coming from the desired input to the power coming from all other inputs.

3) Extinction ratio (ON-OFF switches): the ratio of the output power in the ON state to the output power in the OFF state; this ratio should be as large as possible.

4) Polarization-dependent loss (PDL): if the loss of the switch is not equal for both states of polarization of the optical signal, the switch is said to have polarization-dependent loss. Optical switches should have low PDL.
Other parameters taken into consideration include reliability, temperature resistance, energy usage and scalability. Scalability here means the ability to build switches with large port counts that still perform adequately; it is a particularly significant concern [7].
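The dB-valued parameters above follow directly from measured powers; a small sketch of the conversions (the example figures are hypothetical, not taken from any datasheet):

```python
import math

def db(ratio: float) -> float:
    """Express a power ratio in decibels."""
    return 10 * math.log10(ratio)

def insertion_loss_db(p_in: float, p_out: float) -> float:
    """Power lost through the switch, in dB (smaller is better)."""
    return db(p_in / p_out)

def crosstalk_db(p_unwanted: float, p_wanted: float) -> float:
    """Leakage to an unselected output relative to the selected one."""
    return db(p_unwanted / p_wanted)

def extinction_ratio_db(p_on: float, p_off: float) -> float:
    """ON-state power over OFF-state power (larger is better)."""
    return db(p_on / p_off)

# Hypothetical figures for a 2x2 switch, 1 mW launched:
print(insertion_loss_db(1.0, 0.8))       # ≈ 0.97 dB
print(crosstalk_db(0.001, 0.8))          # ≈ -29.0 dB
print(extinction_ratio_db(0.8, 0.0008))  # ≈ 30.0 dB
```

Loss uniformity can then be checked by computing `insertion_loss_db` for every input-output pair and comparing the spread of the results.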

1.4 OPTICAL SWITCHING TECHNOLOGIES

1) Opto-mechanical Switches: Opto-mechanical technology was the first commercially available technology for optical switching; a basic design is shown in Fig 1.4. In opto-mechanical switches the switching function is implemented by mechanical means, including directional couplers, prisms and mirrors. These mechanical switches offer a high extinction ratio, low crosstalk, low fabrication cost, low insertion loss and low polarization-dependent loss. Their switching speeds are on the order of a few milliseconds, which may not suit many applications, though they can be adjusted to specific requirements. Their main disadvantage is a lack of scalability [6].

Moreover, the long-term reliability of many mechanical components is of some concern. Opto-mechanical switch constructions are restricted to 1x2 and 2x2 port sizes; larger port counts can only be obtained by combining several 1x2 and 2x2 switches, but this increases cost and degrades the performance of the switch. Opto-mechanical switches are generally used in fiber protection and in wavelength add/drop applications with very small port counts [18].

2) Microelectromechanical System Devices: Although Micro-Electro-Mechanical System (MEMS) devices can be regarded as one category of opto-mechanical switch, they are treated separately, both because the telecommunications industry has shown extensive interest in them and because they perform much better than other opto-mechanical switches [18]. MEMS switches use small reflective surfaces to redirect light beams to a particular port, either by deflecting the light off to a port or by guiding it straight through to a port [19].

There are two MEMS approaches to optical switching: two-dimensional (2-D), or digital, and three-dimensional (3-D), or analog, MEMS. In 2-D MEMS the switches are digital, since each mirror position is bistable (ON or OFF), which makes driving the switch very straightforward. A top view of a 2-D MEMS device shows microscopic mirrors arranged in a crossbar configuration to obtain cross-connect functionality, with collimated light beams propagating parallel to the substrate plane [20]. When a mirror is actuated, it moves into the path of the beam and directs the light to one of the outputs, since it makes a 45-degree angle with the beam. This arrangement also permits light to pass through the matrix without striking a mirror, which can be used for adding or dropping optical channels (wavelengths). The price paid for the simplicity of mirror control in a 2-D MEMS switch is optical loss [5].
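The crossbar logic of a 2-D MEMS switch can be sketched abstractly: raising the mirror at row i, column j deflects input beam i down to output j, so any one-to-one input-to-output mapping corresponds to exactly one raised mirror per used row and column. A toy sketch (the function name and interface are illustrative, not from the text):

```python
def route(connections: dict, n: int) -> set:
    """Return the set of raised mirrors (row, col) realizing the requested
    input -> output mapping in an n x n 2-D MEMS crossbar.  Each input beam
    travels along its row until it meets a raised 45-degree mirror, which
    deflects it down its column to the corresponding output; beams pass
    freely over lowered mirrors."""
    if not all(0 <= i < n and 0 <= j < n for i, j in connections.items()):
        raise ValueError("port index out of range")
    if len(set(connections.values())) != len(connections):
        raise ValueError("two inputs cannot share an output")
    return {(i, j) for i, j in connections.items()}

# Route input 0 -> output 2, input 1 -> output 0, input 2 -> output 1:
mirrors = route({0: 2, 1: 0, 2: 1}, n=3)
print(sorted(mirrors))  # [(0, 2), (1, 0), (2, 1)]
```

The bistable (ON/OFF) mirror states are what make the 2-D architecture "digital": no analog mirror-angle control loop is needed, at the cost of the optical loss noted above.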

3) Electro-optic Switches: A 2x2 electro-optic switch uses a directional coupler whose coupling ratio is altered by varying the refractive index of the material in the coupling region. Lithium niobate (LiNbO3) is one of the most common materials used: the switch is constructed on a lithium niobate waveguide, and applying a voltage to the electrodes changes the substrate’s index of refraction, which steers the light along the appropriate waveguide path to the desired port [17]. Electro-optic switches can change state extremely quickly, usually in less than a nanosecond; the switching time is limited by the capacitance of the electrode configuration. They are also reliable, but at the cost of high insertion loss and possible polarization dependence. A switch can be made polarization independent at the cost of a higher driving voltage, which in turn limits the switching speed. Larger switches can be built by integrating many 2x2 switches on a single substrate, but these have comparatively high insertion loss and PDL and are much more expensive than mechanical switches [20].

4) Thermo-optic Switches: These switches are based on the thermo-optic effect: the refractive index of a dielectric material varies with its temperature. Thermo-optic switches fall into two categories, interferometric switches and digital optical switches.

a) Interferometric switches are mainly based on Mach-Zehnder interferometers. Such a device consists of a first 3-dB coupler that splits the signal into two beams, which travel along two separate arms of equal length, and a second 3-dB coupler that merges the two split beams and finally splits the signal again. Heating one arm of the interferometer changes its refractive index, so that arm presents a changed optical path length; the phase difference between the two light beams is thereby altered. Since the interference can be constructive or destructive, the power at each output port can be minimized or maximized, and the output port is thus selected [8].
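The interferometric switching just described can be modeled with the ideal two-coupler Mach-Zehnder transfer function, where the heated arm contributes a phase shift Δφ = (2π/λ)·Δn·L. A minimal sketch (which physical port is "bar" versus "cross" is a device convention, not fixed by the text):

```python
import math

def mzi_outputs(delta_phi: float):
    """Ideal lossless Mach-Zehnder interferometer with two 3-dB couplers.
    Returns (P_bar, P_cross) as fractions of the input power as a function
    of the phase difference delta_phi between the two arms."""
    p_cross = math.cos(delta_phi / 2) ** 2
    return 1 - p_cross, p_cross

def thermo_optic_phase(dn: float, length_m: float, wavelength_m: float) -> float:
    """Phase shift induced by a heater raising the arm index by dn."""
    return 2 * math.pi * dn * length_m / wavelength_m

# delta_phi = 0: all power on one port; delta_phi = pi: all on the other.
print(mzi_outputs(0.0))      # (0.0, 1.0)
print(mzi_outputs(math.pi))  # ≈ (1.0, 0.0)
```

Because the outputs vary as cos²(Δφ/2), only a π phase shift in one arm is needed to swap the ports completely, and intermediate phases give the variable-attenuator behavior mentioned later for this technology.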

b) Digital optical switches are integrated optical devices, generally made of silica on silicon. The switch consists of two interacting waveguide arms along which light propagates; the phase difference between the beams in the two arms determines the output port. Heating one of the arms changes its refractive index, and the light is transmitted through one path instead of the other, as in a 2x2 digital optical switch. Thermo-optic switches are usually small, but they require high driving power and have limited optical performance. The drawbacks of this technology include high power dissipation and limited integration density (large die area), and many commercially available thermo-optic switches require forced-air cooling for reliable operation [7]. Some optical performance parameters, such as crosstalk and insertion loss, may not be acceptable for some applications. On the other hand, the technology allows variable optical attenuators and wavelength-selective elements (arrayed waveguide gratings) to be integrated on the same chip with the same technology [18,7].

5) Liquid-Crystal Switches: The liquid-crystal state is a phase exhibited by a large number of organic materials over certain temperature ranges. In the liquid-crystal phase, molecules can take on a certain mean relative orientation because of their permanent electric dipole moment. By applying a suitable voltage across a cell filled with liquid-crystal material, it is possible to act on the orientation of the molecules and therefore to modify the optical properties of the material. Liquid-crystal optical switches rely on changing the state of polarization of the incident light by applying an electric field across the liquid crystal [8,18].

Parrytomar (talk)07:12, 14 May 2017

Optical Waveguide and IoT

Gurjit Kaur and Pradeep Tomar

Introduction about Optical Waveguide

The propagation of an electric field through a waveguide can be understood intuitively using the ray-optics model described by Snell’s Law:

n1 sin θ1 = n2 sin θ2

Snell’s Law relates the incident angle θ1 of light in a medium with index n1, impinging on the interface with a material of index n2, to the resulting angle θ2 to which the light is refracted when it enters the new medium. A diagram of the physical arrangement is shown below:

If n1 and n2 are chosen appropriately, the refracted angle can reach 90°, and a condition termed Total Internal Reflection (TIR) occurs, in which the incident light impinging on the interface is reflected back into the starting medium. The angle of incidence at which this condition first occurs is called the critical angle, calculated as θc = arcsin(n2 / n1). A waveguiding structure works in such a way that each interface reflection occurs at an angle larger than the critical angle, so the light ray will, in theory, continue in the core region indefinitely. The simplest structure that can be understood by this method is the slab waveguide: a 2D waveguide consisting of a high-index material sandwiched between two low-index materials. If light is injected into the edge of this structure within the acceptance angle of the waveguide, it is confined to the high-index region. A very common example of a 2D waveguide is the rib waveguide.
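The critical-angle formula can be evaluated directly; a small sketch using the Si and SiO2 indices quoted later in the text:

```python
import math

def critical_angle_deg(n_core: float, n_clad: float) -> float:
    """Incidence angle (measured from the interface normal) above which
    total internal reflection occurs: theta_c = arcsin(n_clad / n_core)."""
    if n_clad >= n_core:
        raise ValueError("TIR requires n_core > n_clad")
    return math.degrees(math.asin(n_clad / n_core))

# Si core (n = 3.45) on SiO2 cladding (n = 1.46):
print(round(critical_angle_deg(3.45, 1.46), 1))  # ≈ 25.0 degrees
```

The very small critical angle of the Si/SiO2 pair means rays over a wide range of angles are trapped, which is the ray-optics view of the tight confinement discussed in the silicon-waveguide section.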

For the analysis of waveguide devices, the basic equations from which all conditions and solutions are derived are Maxwell’s Equations, given below. For a source-free dielectric, the charge density ρ and current density J are taken to be zero.

∇ · (εE) = ρ
∇ × E = −μ ∂H/∂t
∇ × H = ε ∂E/∂t + J
∇ · (μH) = 0

The guided-mode solutions follow from the two Maxwell curl equations.

Consequently the equations simplify, and by solving them we can obtain the transverse components of E and H.

2. Silicon Waveguides

Silicon as a substrate is used primarily for its electrical properties. As a semiconductor it can be doped with a wide variety of impurities, such as boron and phosphorus, to control its electrical characteristics accurately. Silicon is unusual among semiconductors in that it has a native oxide (SiO2) that is adherent, an excellent electrical insulator as well as a diffusion barrier, and highly selective to etching. SOI substrates provide several advantages in micro-optical systems, primarily as a result of the large index contrast between Si (n = 3.45) and SiO2 (n = 1.46): the core of the waveguide is fabricated in the thin silicon top layer, and the underlying oxide is used as a cladding. This configuration provides a high index difference between the core and the substrate, and the optical properties of silicon and its native oxide allow light to be confined at the material interface by total internal reflection (TIR). Because the light is so highly confined, single-mode waveguides can have core cross-sections with dimensions of only a few hundred nanometers, and bending radii of a few micrometers, with minimal losses. Because field leakage into the substrate and surrounding cladding is so low, these waveguides can be fabricated close together without coupling occurring between them, and the high index of silicon also allows devices to be shorter. SOI waveguides are so small that they are commonly referred to as nanophotonic wires. The high thermal conductivity of silicon allows dense integration, since heat generated by devices is easily dissipated; SOI technology is enabling the miniaturization of these photonic structures by factors of ten to ten thousand, leading to ultra-dense integration. Another advantage of silicon is that it is optically transparent at long-haul communication wavelengths, between 1.3 μm and 1.7 μm.
This allows SOI waveguides, as well as other nano-photonic devices fabricated on this platform, to be integrated easily into existing silica-based fiber-optic networks. A few undesirable properties of silicon limit the degree of integration and the performance of photonic components: silicon has several poor optical properties, and its waveguides are sensitive to losses. One major disadvantage of silicon is that it does not exhibit the first-order electro-optic (Pockels) effect as III-V semiconductors do. Silicon waveguides have several loss mechanisms, including absorption, scattering from volumetric refractive-index inhomogeneity, coupling of guided modes to substrate modes, and interface-induced scattering. Additional losses not associated with propagation occur when coupling light into and out of the device, and further losses are caused purely by the waveguide structure, for example at bends. Waveguide losses are typically quantified in dB/cm.
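A dB/cm figure follows directly from input and output powers over a known length; a small sketch with hypothetical numbers (the measurement values below are illustrative, not from any reported device):

```python
import math

def loss_db_per_cm(p_in: float, p_out: float, length_cm: float) -> float:
    """Propagation loss from transmitted power over a known length."""
    return 10 * math.log10(p_in / p_out) / length_cm

# Hypothetical cut-back-style measurement:
# 1.0 mW launched, 0.5 mW detected after 2 cm of waveguide.
print(round(loss_db_per_cm(1.0, 0.5, 2.0), 2))  # ≈ 1.51 dB/cm
```

In practice, comparing waveguides of several lengths (the cut-back method) separates this propagation loss from the fixed coupling losses at the input and output facets.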

3. ARROW WAVEGUIDES

Antiresonant Reflecting Optical Waveguides (ARROWs) are integrated waveguides in which the guided field is confined by antiresonant Fabry-Perot reflections rather than total internal reflection (TIR). A Fabry-Perot interferometer consists, at heart, of a pair of partially reflective surfaces facing each other; in an ARROW this antiresonant reflection replaces TIR at least at one of the faces, usually the substrate cladding. This implies some power leakage of the antiresonant modes into the substrate, although losses can be reasonably low with a suitable design of the structure. An ARROW waveguide has a silicon substrate and is a multilayer waveguide in which light is confined within the core by an antiresonant reflection, with a very high reflection coefficient, at the two interference cladding layers underneath the core. Antiresonant waveguides fabricated using the advantages of silicon technology have attracted great interest lately because they provide single-mode operation in the transverse direction with a low-index guiding layer (usually SiO2) and a structure size that gives good compatibility with single-mode optical fibers, which are used in many everyday applications. The most important characteristic of ARROW waveguides is that they can operate in single mode even for core dimensions and rib parameters of a few micrometers, together with a low-index cladding whose refractive index is lower than that of the core, leading to various experimental advantages. The basic characteristics of these waveguides are: low losses for the fundamental mode, implying maximum light confinement; high tolerance in the design of the refractive index and thickness of the cladding layers; and strict fabrication tolerances. Since low-loss operation of the waveguide relies on properly phased reflections from all the cladding interfaces, one might conclude that the device works only over a narrow band of wavelengths.

Furthermore, ARROW structures have been studied because they present selective losses depending on the wavelength and on the polarization of the light and, accordingly, can be used as integrated wavelength filters and polarizers. Therefore, conditions for leakage of guided light are achieved by a suitable design of the structure, where ARROW operation is controlled by a proper choice of the refractive index and thickness of the antiresonant layers.
Another advantage is that the substrate cladding can be made reasonably thin because of the shielding effect of the antiresonant structure, avoiding the use of thick substrate claddings that require long deposition processes.

Lateral confinement of the slab antiresonant modes is achieved, in the waveguides that are the subject of this work, by means of a rib structure. Just as the ARROW structure provides the conditions for power leakage from the waveguide, rib parameters such as rib depth and waveguide width determine the guiding conditions of the light in the ARROW. Rib parameters also have a strong influence on the performance of rib ARROWs, and if they are not properly controlled, problems such as the loss of the fundamental mode for narrow waveguides, or excessive losses for waveguides with low rib heights, may arise. ARROWs are leaky structures that can be solved analytically, leading to modes with complex propagation constants. The imaginary part of the propagation constant accounts for radiation losses through the substrate, which depend on the wavelength and polarization of the light and are determined by the thickness and refractive index of the ARROW layers. ARROWs can be realized as rib waveguides or slab waveguides (1D confinement). ARROWs are practically formed by a low-index layer embedded between higher-index layers. Note that the refractive indices of these layers are reversed when compared to usual waveguides. ARROW structures are often used for guiding light in liquids for optofluidic applications, particularly in microfluidic systems. This is due to the difficulty of finding suitable optical cladding materials with a refractive index lower than that of the liquid, which would be required to form a conventional waveguide structure.

4. Waveguide Modes Waveguide modes are characteristic of a particular waveguide structure. A waveguide mode is a transverse field pattern whose amplitude and polarization profiles remain constant along the longitudinal z coordinate. Therefore, the electric and magnetic fields of a mode can be written as follows:
E(x, y, z) = Ev(x, y) exp(iβv z),  H(x, y, z) = Hv(x, y) exp(iβv z)
where v is the mode index, Ev(x, y) and Hv(x, y) are the mode field profiles, and βv is the propagation constant of the mode. For a waveguide with two-dimensional transverse optical confinement, there are two degrees of freedom in the transverse xy plane, and the mode index v consists of two parameters characterizing the variations of the mode fields in these two transverse dimensions. For example, v represents two mode numbers, v = mn with integral m and n, for discrete guided modes. As the wave is reflected back and forth between the two interfaces, it interferes with itself. A guided mode can exist only when a transverse resonance condition is satisfied, that is, when the repeatedly reflected wave interferes constructively with itself. Modes can be classified as: • Transverse Electric and Magnetic (TEM) • Transverse Electric (TE) • Transverse Magnetic (TM) Transverse electric (TE) fields are those whose electric field vector lies entirely in the xy plane, that is, transverse to the direction of net travel (the z direction). A TE wave has Ez = 0 and Hz ≠ 0. Cut-off frequencies for TE modes:
fc(mn) = (1 / 2π√(με)) √[(mπ/a)² + (nπ/b)²], for a rectangular guide of cross-section a × b
Transverse magnetic (TM) fields are those for which the electric field is no longer purely transverse: it has a component along the z direction. The magnetic field, however, which points in the y direction for this type of mode, is entirely transverse (i.e. Hz = 0).

Cut-off frequencies for TM modes:
fc(mn) = (1 / 2π√(με)) √[(mπ/a)² + (nπ/b)²], the same form as for TE modes, but with m ≥ 1 and n ≥ 1
The Transverse Electric and Magnetic (TEM) mode is characterized by Ez = 0 and Hz = 0. For this to occur, fc = 0; in other words, there is no cut-off frequency for waves that support TEM modes.
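The cut-off frequencies mentioned above can be evaluated numerically; a minimal sketch, assuming a rectangular metallic waveguide of cross-section a × b with a non-magnetic, lossless filling (the WR-90 dimensions are a standard example, and the function name is ours):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def te_cutoff_hz(m, n, a, b, n_medium=1.0):
    """Cut-off frequency of the TE_mn (or TM_mn) mode of an a x b
    rectangular metallic waveguide filled with a medium of
    refractive index n_medium."""
    v = C / n_medium  # phase velocity in the filling medium
    return (v / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2)

# Standard WR-90 X-band guide: a = 22.86 mm, b = 10.16 mm
print(te_cutoff_hz(1, 0, 0.02286, 0.01016) / 1e9)  # TE10 cutoff, ≈ 6.56 GHz
```

Below the cut-off frequency of a given mode, that mode cannot propagate and decays evanescently.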

5. Important Characteristics of the Waveguide

   Effective Index

The effective refractive index is a key parameter in guided propagation, just as the refractive index is in unguided wave travel. The effective refractive index changes with the wavelength (i.e. dispersion) in a way related to how the bulk refractive index does. We can define the waveguide phase velocity vp as vp = ω / β. We now define an effective refractive index neff as the free-space velocity divided by the waveguide phase velocity: neff = c / vp = cβ / ω = β / k. Waveguide effective index: neff = n1 sin θ. For waveguiding at the n1-n2 interface, we see that n2 ≤ neff ≤ n1. At θ = 90°, neff = n1 implies that a ray traveling parallel to the slab (core) has an effective index that depends on the guiding medium alone. At θ = θc, neff = n2 implies that the effective index for critical-angle rays depends only on the outer material n2. The effective wavelength as measured in the waveguide is λz = λ / neff.
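The effective-index relations above translate directly into code; a small sketch with illustrative index values n1 = 1.50 and n2 = 1.45 (not taken from the text):

```python
import math

def effective_index(n1, theta_deg):
    """Effective index n_eff = n1 * sin(theta) for a ray at angle
    theta (degrees, measured from the interface normal) in the
    guiding layer of index n1."""
    return n1 * math.sin(math.radians(theta_deg))

n1, n2 = 1.50, 1.45                          # illustrative core/cladding indices
theta_c = math.degrees(math.asin(n2 / n1))   # critical angle at the n1-n2 interface

print(effective_index(n1, 90.0))             # ray parallel to the slab -> n_eff = n1 = 1.5
print(round(effective_index(n1, theta_c), 2))  # critical-angle ray -> n_eff = n2 = 1.45
```

The two printed values confirm the limits n2 ≤ neff ≤ n1 stated above.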

Gain in an Optical Waveguide

Gain in an optical waveguide is generally defined as G = Pout / Pin, where Pout is the signal output power that comes out of the waveguide and Pin is the input power coupled into the waveguide.
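As an illustration of this definition, a hypothetical sketch computing the gain both as a ratio and in decibels (the sample powers are invented):

```python
import math

def waveguide_gain(p_out_mw, p_in_mw):
    """Linear gain G = Pout / Pin (powers in the same units)."""
    return p_out_mw / p_in_mw

def gain_db(p_out_mw, p_in_mw):
    """The same gain expressed in decibels."""
    return 10.0 * math.log10(p_out_mw / p_in_mw)

# Example: 0.1 mW coupled in, 1.0 mW out -> G = 10, i.e. 10 dB
print(waveguide_gain(1.0, 0.1))  # 10.0
print(gain_db(1.0, 0.1))         # 10.0
```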

Since g(ω) depends on the incident optical power, the gain will start to decrease with an increase in optical power P when P ≈ Ps. Therefore, we cannot increase the optical power of the signal beyond the saturation level, as it will not lead to a constant increase in gain.
g(ω) = go / [1 + (ω − ωo)² T2² + P / Ps] (T2 being the dipole relaxation time)
Here go is the peak gain, ω is the optical frequency of the incident signal, ωo is the transition frequency, P is the optical power of the incident signal, and Ps is the saturation power. To achieve maximum gain, we always assume that the incident frequency is tuned for peak gain (ω = ωo). Losses in an Optical Waveguide There are three major types of losses in an optical waveguide: Scattering losses: these are mainly caused by the surface roughness of the sidewalls of the waveguide. Sidewall roughness is mainly generated during the etching process. This type of propagation loss is high for a small-dimension waveguide.

The scattering loss is further divided into:

Volume scattering loss: this type of propagation loss is caused by imperfections, which can be due to design flaws or to contamination of the structure by doping. These losses are negligible compared to surface scattering losses.

Surface scattering loss: this type of propagation loss is dominant in optical waveguides. It is created by the roughness or irregular nature of the waveguide surface. Absorption Losses:

This type of propagation loss occurs when photons are incident on the waveguide surface. When light is incident on the waveguide surface, light energy (in the form of photons) is absorbed by the surface and only some of it passes through. The absorbed energy is lost, as it is used to excite electrons from the valence band into the conduction band; because of these absorption losses we get less energy at the output than at the input.

Radiation Losses: these losses occur only when the waveguides are bent; they are absent in linear waveguides. When we design a waveguide that has bends and curves, these losses become significant.

6. Dispersion in an Optical Waveguide

This phenomenon is basically described as the broadening of a pulse during its propagation. It limits the rate of information that can be transferred per pulse. Dispersion is classified into various categories: • Modal dispersion: modal dispersion is a distortion mechanism occurring in multimode fibers and other waveguides, in which the signal is spread in time because the propagation velocity of the optical signal is not the same for all modes. Modal dispersion limits the bandwidth of multimode fibers. Modal dispersion should not be confused with chromatic dispersion, a distortion that results from the differences in propagation velocity of different wavelengths of light; modal dispersion occurs even with an ideal, monochromatic light source. • Material and waveguide dispersion: material dispersion can be a desirable or undesirable effect in optical applications. Most often, chromatic dispersion refers to bulk material dispersion, that is, the change in refractive index with optical frequency. However, in a waveguide there is also the phenomenon of waveguide dispersion, in which a wave's phase velocity in a structure depends on its frequency simply due to the structure's geometry. More generally, "waveguide" dispersion can occur for waves propagating through any inhomogeneous structure (e.g., a photonic crystal), whether or not the waves are confined to some region. In a waveguide, both types of dispersion will generally be present, although they are not strictly additive. For example, in fiber optics, material and waveguide dispersion can effectively cancel each other out to produce a zero-dispersion wavelength, important for fast fiber-optic communication.
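The size of the modal-dispersion effect can be estimated with a simple two-ray argument; a sketch assuming a step-index multimode guide, with illustrative index values not taken from the text:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def modal_spread_s(length_m, n1, n2):
    """Delay difference between the axial ray and the critical-angle
    ray in a step-index multimode guide:
    dt = (L / c) * (n1 / n2) * (n1 - n2)."""
    return (length_m / C) * (n1 / n2) * (n1 - n2)

# 1 km of guide with n1 = 1.50, n2 = 1.485 -> roughly 50 ns of pulse spread
print(modal_spread_s(1000.0, 1.50, 1.485))
```

A spread of ~50 ns/km limits such a link to pulse rates well below 20 Mpulses/s per km, which is why single-mode fiber is preferred for long-haul communication.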

Parrytomar (talk)07:05, 14 May 2017

Component-Based Software Engineering for Integration and Communication

Pradeep Tomar and Gurjit Kaur

The history of software development began in the UK in 1948 (Ezran et al., 2002). In that year, the Manchester "Baby" was the first machine to demonstrate the execution of stored-program instructions. Since that time, there has been a continuous stream of innovations that have pushed forward the frontiers of techniques to improve software development processes. From subroutines in the 1960s through modules in the 1970s, objects in the 1980s, and components in the 1990s, software development has been a story of a continuously ascending spiral of increasing capability and economy, battling increasing complexity. This progression is a consequence of software projects becoming more complex and uncontrollable, along with problems involving schedules, costs, and failures to meet the requirements defined by customers. In the 1960s, McIlroy (1969) introduced the idea of software development based on components. Currently, the notion of a software component may include, apart from plain code, other artifacts of the software development process as well - for instance, requirements, design documents, test scripts, and user manuals (Freeman, 1983; Krueger, 1992; Lim, 1998). The whole idea of Component-Based Software Development (CBSD) thus pursues an efficient and effective reuse process, i.e., the activities of managing, producing, brokering, and consuming all kinds of software components (Lim, 1998; STARS, 1993), beyond any single software development process. Software industries are striving for new techniques and approaches that could improve developer productivity, reduce time to market, deliver excellent performance, and produce systems that are flexible, scalable, secure, and robust. Software components can meet these demands, and Component-Based Software Engineering (CBSE) has emerged, generating tremendous interest in the software development community.
The paradigm shift to software components and component reuse appears inevitable, necessitating drastic changes to current software development and business practices (Stritzinger, 1996). Reuse of pre-fabricated hardware building blocks has proven to be an extremely useful production technique in numerous industrial domains. Electric switches, controllers, motors, gears, pumps, etc. are usually not designed for a single application, but for many potential uses. Therefore, the desire to assemble or compose software products from pre-fabricated, reusable parts is also natural and rather old. The invention of Object-Oriented Programming (OOP) was an important step in the direction of greater reusability. The simplest and most often practiced reuse in OOP is reuse of concrete classes. This kind of reuse presupposes concrete implementations of well-defined, elaborated concepts. On the other hand, while CBSD is getting popular and is being considered an efficient and effective approach to building large software applications and systems, potential risks within component-based practice areas such as quality assessment, complexity estimation, performance prediction, configuration, and application management should not be taken lightly. In the existing literature there is a lack of systematic work in identifying and assessing these risks. In particular, there is a lack of a structuring framework that could help the related CBSD stakeholders to measure these risks. CBSD is closely related to reuse. The idea of reusing pieces of software originates from the early sixties, when the term "software crisis" was mentioned for the first time. The basic idea is simple: when developing new systems, use components that are already developed. When researchers and practitioners develop a specific function needed to improve another system, they develop it in such a way that anybody can reuse it in other software in the future.


Software is becoming increasingly complex while providing better functionality. Organizations often use component-based technologies, instead of developing all the parts of a system from scratch, to produce cost-effective systems. The motivation behind the use of components was initially to reduce the cost of development, but it later became more important to reduce the time to market, to meet rapidly emerging consumer demands. At present, the use of components is more often motivated by possible reductions in development costs: by using components it is possible to produce more functionality with the same investment of time and money. CBSE provides methods, models, and guidelines for the developers of Component-Based Systems (CBS). In software engineering, several underlying technologies have matured that permit building components and assembling applications from sets of those components with object-oriented and component technology, e.g. the Common Object Request Broker Architecture (CORBA). The business and organizational context within which applications are developed, deployed, and maintained has changed. There is an increasing need to communicate with legacy systems, as well as to constantly update current systems. This need for new functionality in current applications requires technology that will allow easy additions. No standards have been adopted by the industry as a whole, and without standards it has been difficult for developers to be motivated to design with reusability in mind. Increasingly complex systems now need to be built in shorter amounts of time. Software engineers are faced with a growing demand for complex and powerful software systems, where new products have to be developed more rapidly and product cycles seem to decrease, at times, to almost nothing.
Some advances in software engineering have contributed to increased productivity, such as OOP, Component-Based Development (CBD), domain engineering, and software product lines, among others. These advances are known ways to achieve software reuse. Some studies of reuse have shown that 40% to 60% of code is reusable from one application to another, 60% of design and code is reusable in business applications, 75% of program functions are common to more than one program, and only 15% of the code found in most systems is unique and new to a specific application (Ezran et al., 2002). According to Mili et al. (1995, 1998), rates of actual and potential reuse range from 15% to 85%. By maximizing the reuse of tested, certified, and organized assets, organizations can obtain improvements in cost, time, and quality.

Building software from existing software components is not a fresh idea, but development of software from components is gaining more and more popularity in software organizations, due to the short development life cycle and intense marketing pressure of software. CBSE focuses on the process of rapid assembly of systems by reusing software components of high quality, and it is based on the aspects given by Bachmann et al. (2000). The first aspect is that the properties of a software component must be predictable. Researchers and practitioners break a complicated problem into small pieces and integrate the solutions of all the parts, which is the fundamental concept of most mature engineering disciplines, and it also applies to CBSE. To build a software system of high quality, the properties of its software components must be predictable, just as there is no way of building a high-quality computer without knowing anything about its parts. The second aspect is that software components should be ready for rapid assembly; in the present software market, time is a critical factor for success. Rapid assembly has always been one of the advantages of CBSE, due to new CBS technology and the reuse-oriented development environment. A reuse-oriented development environment is claimed to be a healthy direction for software technology development: it can increase software quality and reliability because reusable software components have been used and tested before. Software reuse can increase software productivity, shorten the software release time to market, and reduce maintenance cost (Krueger, 1992).

The work done in this study is to explore a practical model to develop a CBS by using two different approaches, viz. development of reusable and testable software components, and development with reuse of software components. CBSD is receiving increasing interest in the software engineering community. The aim of CBSE is to create a collection of reusable and testable components that can be used for CBD; CBD then becomes the selection, adaptation, and composition of components rather than implementing the application from scratch. But what are the main reasons to use software components for software development in comparison to traditional approaches? Or, why can researchers and practitioners not be satisfied with the current state of software development? The answer is, unfortunately, a buzzword in the software industry - the software crisis. This study presents the problems of traditional software development and the software crisis.

Software Crisis The development of components and CBS is a young discipline. People are surrounded by technical equipment that contains different types of software. Software has given us more advanced machines, but at the same time has made us more dependent on reliable software. For a number of years, researchers and practitioners have wrestled with the so-called "software crisis", a term first used by Professor Boehm in his work (Boehm, 1976); crisis means here that the quality of software is generally unacceptably low and that deadlines and cost limits are not being met. Software is produced in a very individual manner. Software is always produced from scratch, and when it is delivered it is often poorly documented, with much of the knowledge surrounding it still sitting in the programmer's head. Researchers and practitioners face major problems in software development: poor software quality, low productivity, and high cost and time.

Poor Quality Software and Low Productivity The rapid development of software, with rapid changes in requirements, increases the development time and complexity of software systems. This problem is worsened by the fact that most of the programming languages currently used for software development do not promote working in the desired way. The traditional way of driving program development has led to many problems. Those who order software often find it difficult to explain to programmers what they want. As lead times become shorter and shorter, documentation and testing suffer. When at last the product is delivered, sometimes very late, it is often of poor quality in terms of lack of documentation and testing. Documentation is a main process during the development of any software or component; good documentation of software and components supports reusability.

High Software Cost and Time The economic aspect of the software crisis is aggravated by the fact that the relative cost of software compared with the cost of hardware has increased greatly, by a factor of 10:1 at present. A significant reason for this is that, whereas hardware prices have fallen dramatically, software development costs and time cannot match this reduction because of software's labor-intensive nature. Software support costs, which are made up primarily of labor costs, account for about 75% of total software costs. The non-availability of good programmers and developers, and regular changes in software requirements during development, affect software deadlines and increase the time needed to deliver the software.

Facing the software crisis, people have made great improvements in the last 30 years. Reusability and testing are perhaps the two most effective approaches in response to the software crisis. Reuse of software components is becoming more and more important in a variety of aspects of software engineering. Recognition of the fact that many software systems contain many similar or even identical components that are developed from scratch over and over again has led to efforts to reuse existing components. Structuring a system into largely independent components has several advantages. It is easy to distribute the components among various engineers to allow parallel development. Maintenance is easier when clean interfaces have been designed for the components, because changes can be made locally without unknown effects on the whole system. Software reuse and testing have a major influence on the development of software systems: if components' interrelations are clearly documented, tested, and kept to a minimum, it becomes easier to exchange components and incorporate them into new systems. The main goals of component reuse and testing are: • Components that are easy to search for and integrate. • Reusability and testability of existing code that is easy and fast. • A specific problem only has to be solved once; the solution should be put into components and can be used by anyone. • Easier modification and maintenance of existing components. • Components that are modular and quite easy to use.

CBSE AND CBSE PROCESSES Today component engineering is gaining substantial interest in the software engineering community. Although a lot of research effort has been devoted to analysis methods and design strategies of CBS, few papers address the CBSE process itself. This chapter identifies and classifies the design, development, and analysis issues of software components and CBS. This study proposes a model and a component testing process. Finally, it shares our observations and insights on the development and testing of CBS. CBSE is a process which helps in designing and developing software by using reusable and testable software components. Clements (Clements et al., 2003) describes CBSE as embodying "the 'buy, don't build' philosophy". He also says that "in the same way that early subroutines liberated the software developer from thinking about details", CBSE "shifts the emphasis from programming to composing software systems". The goal of CBSE is to increase productivity and quality, and to reduce time to market, in software development. An important paradigm shift of CBSE is to build software systems from standard components rather than from scratch. Over the past decade, many software developers, computer scientists, and researchers have attempted to improve software development practices by improving design techniques, improving testing techniques, developing more expressive notations for capturing a system's intended functionality, and encouraging reuse of pre-developed system pieces rather than building from scratch. Each approach has had some notable success in improving the quality, flexibility, and maintainability of application systems, helping many organizations develop complex applications deployed on a wide range of platforms. Despite this success, many software industries developing, deploying, and maintaining large-scale software-intensive systems still face tremendous problems, especially when it comes to testing and updating the systems.
Unless carefully designed, it can be very expensive, in terms of time as well as cost, to add further functionality to a system effectively, and then to test whether the addition has been successful. Furthermore, in recent years, the requirements, tactics, and expectations of application developers have changed significantly. They are more aware of the need to write reusable code - even though they may not always employ this practice. CBSE should, in theory, allow software to be more easily assembled and less costly to build. Although this cannot be guaranteed, the limited experience of adopting this strategy has shown it to be true. Software systems built using CBSE are not only simpler to assemble but usually turn out to be more robust, adaptable, and updateable.

Conventional software reuse and CBSE: although object-oriented technologies have promoted software reuse, there is a big gap between whole systems and classes. To fill the gap, many interesting ideas have emerged in object-oriented software reuse over the last several years. CBSE takes different approaches from conventional software reuse in the following manner (Aoyama, 1998): • A component should be able to plug and play with other components, so that components can be composed at run-time without compilation. • A component should separate the interface from the implementation and hide the implementation. • A component is designed on a pre-defined architecture so that it can interoperate with other components. • Component interfaces should be standardized so that they can be developed by multiple developers and widely reused across corporations. • A component can be acquired and improved by getting feedback from users and end users. CBSE is in many ways similar to conventional or Object-Oriented Software Engineering (OOSE). A software team establishes requirements for the system to be built using conventional requirements elicitation techniques. Rather than a more detailed design task, the team now examines the requirements to determine what subset is directly amenable to composition rather than construction. The CBSE process identifies not only candidate components but also qualifies each component's interface, adapts components to remove architectural mismatches, assembles components into a selected architectural style, and updates components as requirements for the system change. Two processes occur in parallel during the CBSE process: domain engineering and the CBD process.

Domain Engineering Process The main aim of the domain engineering process is to identify, construct, catalogue, and disseminate a set of software components that have applicability to existing and future software in a particular application domain. The goal is to establish a mechanism by which software engineers can share these components in order to reuse them in future systems. Examples of application domains are railway management systems, defence control systems, financial management systems, and air traffic control systems. Domain engineering begins by identifying the domain to be analysed. This is achieved by examining existing applications and by consulting experts. A domain model is then realised by identifying operations and relationships that recur across the domain and are therefore candidates for reuse. This model guides the software engineer to identify and categorise components, which will subsequently be implemented. One particular approach to domain engineering is structural modeling. This is a pattern-based approach that works under the assumption that every application domain has repeating patterns. These patterns may be in function, data, or behaviour that has reuse potential. This is similar to the pattern-based approach in OOP, where a particular style of coding is reapplied in different contexts.

Component-Based Development Process There are three stages in this process: qualification, adaptation, and composition. Component qualification examines reusable components. These are identified by characteristics in their interfaces, i.e. the services provided and the means by which consumers access these services. This does not always provide the whole picture of whether a component will fit the requirements and the architectural style. Qualification ensures that a candidate component will perform the function required, and that it is compatible with or adaptable to the architectural style of the system. The three important characteristics examined are performance, reliability, and usability. Component adaptation is required because components will very rarely integrate immediately with the system. Depending on the component type, different strategies are used for adaptation. The most common strategies are White-Box Wrapping (WBW), Grey-Box Wrapping (GBW), and Black-Box Wrapping (BBW). Component composition integrates the components into a working system. This is accomplished by way of an infrastructure which is established to bind the components into an operational system. This infrastructure is usually a library of specialised components itself. It provides a model for the coordination of components, and specific services that enable components to coordinate with one another and perform common tasks.
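Black-box wrapping, for instance, can be sketched as a simple adapter; every class and method name below is hypothetical, assuming a legacy component whose source cannot be modified:

```python
class LegacyTempSensor:
    """Hypothetical third-party component; its internals are not visible
    and cannot be changed (the black box)."""
    def read_fahrenheit(self):
        return 68.0

class CelsiusSensorAdapter:
    """Black-box wrapper: adapts the component's interface to the one
    the target system expects, without touching the component's code."""
    def __init__(self, wrapped):
        self._wrapped = wrapped
    def read_celsius(self):
        return (self._wrapped.read_fahrenheit() - 32.0) * 5.0 / 9.0

sensor = CelsiusSensorAdapter(LegacyTempSensor())
print(sensor.read_celsius())  # 20.0
```

White-box and grey-box wrapping differ only in how much of the component's internals the integrator is allowed to inspect or modify.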

CBSD AND CBSD PROCESSES The CBSD consists of two separate but related processes. The first is concerned with the analysis of application domains and the development of domain components, i.e. development for reuse. The second process is concerned with assembling software systems from prefabricated components, i.e. development with reuse (Voas, 1998; Vigder et al., 1996; Voas, 1999; Harrold et al., 1999).

Development for Reuse The component development process is concerned with developing generic and domain-specific components. To achieve successful software reuse, commonalities of related systems must be discovered and represented in a form that can be exploited in developing similar systems. Domain commonalities are used to develop models or software components that can be used to develop systems in the domain. Once reusable components are created, they can be made available within organizations or on the open market as commercial components. If an organization wants to achieve the benefits of sharing and reusing components, a successful reuse strategy must include a reuse method that is consistently applied by component and application developers (Kotonya et al., 2003). The method that supports reuse contains the following steps. • Define and classify client requirements with the help of application, interface, and support. • Search the repository for reusable and testable components that support the requirements. • Analyse candidate components to ensure there is an acceptable match between them and the requirements. • Develop the CBS on a standard component model. • Incorporate the reuse methodology into the system development life cycle.
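The repository-search step in the method above can be sketched in miniature; the repository contents, record fields, and ranking rule are all hypothetical:

```python
# Hypothetical in-memory component repository; each record lists the
# services a component provides and whether it has been tested.
REPOSITORY = [
    {"name": "AuthComponent", "provides": {"login", "logout"}, "tested": True},
    {"name": "ReportComponent", "provides": {"pdf_export"}, "tested": False},
    {"name": "AuditComponent", "provides": {"login", "audit_log"}, "tested": True},
]

def find_candidates(required_services):
    """Return tested components providing at least one required service,
    ranked by how many of the requirements they cover."""
    matches = [
        c for c in REPOSITORY
        if c["tested"] and c["provides"] & required_services
    ]
    return sorted(matches,
                  key=lambda c: len(c["provides"] & required_services),
                  reverse=True)

names = [c["name"] for c in find_candidates({"login", "audit_log"})]
print(names)  # ['AuditComponent', 'AuthComponent']
```

A real repository would also record interface signatures, component models, and quality certifications, so that the later analysis step can check the match in detail.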

Development with Reuse Development with reuse develops some of the early ideas on CBD (Brown et al., 1998) to provide a scalable process with a clear separation of concerns. The negotiation phase attempts to find an acceptable trade-off amongst multiple competing development attributes. The planning phase sets out a justification, objectives, and strategies. The development phase implements the agenda set out in the planning phase. Essential issues to be addressed in this phase include: • The process of defining requirements for a CBS. • The process of partitioning software requirements into logical "components" or "sub-systems" (Sommerville, 2001). • The process of replacing abstract design components with "concrete" off-the-shelf components. • The verification process, intended to ensure that there is an acceptable match between the software components used to build the software and the software being built.

Components raise the level of abstraction to a point where they can be easily used by a domain expert who is not necessarily an expert programmer. They allow software vendors to build visual development environments in which the concept of plugging together these “software” parts forms the basis of any new development. The writing of actual code is kept to a minimum: scripting can be used to glue together components or to tailor existing behavior. A typical development effort using components would be importing the components of interest, customizing each one without explicit coding and finally wiring together the components to form an application. The advantages are immediately obvious:
• Increased productivity gained by reuse of design and implementation;
• Increased reliability by using well-tested code;
• Lower maintenance costs because of a smaller code base;
• Minimized effects of change, since Black Box Programming (BBP) tends to rely on interfaces as compared to explicit programming;
• Components provide a well-encapsulated mechanism to package, distribute and reuse software.
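The "wiring" idea can be sketched as a small glue script that connects component interfaces without touching their internals (black-box reuse). The component classes and the `wire` helper below are illustrative assumptions, not any particular vendor's framework.

```python
# Each component exposes a simple interface; glue code pipes one
# component's output into the next without explicit per-component coding.

class Source:
    def produce(self):
        return [3, 1, 2]

class Sorter:
    def process(self, data):
        return sorted(data)

class Sink:
    def consume(self, data):
        return ",".join(map(str, data))

def wire(*components):
    """Glue script: pipe the source's output through each processor to the sink."""
    def run():
        data = components[0].produce()
        for comp in components[1:-1]:
            data = comp.process(data)
        return components[-1].consume(data)
    return run

app = wire(Source(), Sorter(), Sink())
print(app())  # → 1,2,3
```

Swapping `Sorter` for a differently parameterized component changes the application's behavior without changing the glue.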

COMPONENT-BASED SYSTEMS

Component-Based Software Engineering (CBSE) has two different perspectives. One is based on the relationship to fundamental principles of system design; in this view, the key features of CBSE could have developed simply as inevitable consequences of applying general “system thinking” to software systems. Another is based on observing the history of two related branches of software engineering research. Human-engineered physical devices such as cars, appliances and computers, as well as naturally occurring physical objects such as forests, galaxies and nerve bundles, are often called “systems.” Both this word and the thinking that underlies it are crucial features of modern software engineering as well. Software systems, however, are purely symbolic entities that have meaning and intellectual interest even without a physical manifestation. CBS in physical engineering and in software engineering therefore have many common features, yet differ in important and sometimes subtle ways. Examples of CBS are microkernels and client/server architectures. A component is any part of a CBS that someone wants to treat as an indivisible unit in developing a system with high quality (Weide, 2001). CBSE has received considerable attention among software vendors and information technology organizations. Software component concepts have been evolving very rapidly and a marketplace for software components has emerged. Components have changed the way that programmers develop major applications (Bill and George, 2000; Maurer, 2000). Developing and using various software components as building blocks can significantly enhance CBSD and use; that is why both the academic and commercial sectors have shown tremendous interest in CBSD, and indeed much effort has been devoted to defining and describing the terms and concepts involved. Interpretation of component and CBD varies from vendor to vendor. Some package vendors use the term component to refer to what might be regarded as a sub-system, whereas others use the term component as a synonym for a distributed object. A few others define a component as a reusable deliverable, whereas the Unified Modeling Language (UML) defines a component as a physical executable, irrespective of its attributes (Grover and Gill, 2003).

In order to make a system easy to develop and maintain, the component programming technique is often used. A component is an instance of some class of components. Components of the same class may be parameterized differently and therefore behave in different ways. In component-wise programming, system functionalities are decomposed into small components, with each implementing a small functionality. Components can be glued together by some binding mechanism to realize the system functionality. There are several advantages to component-based design (Xiaoming, 2001): each component realizes a small functionality, which is easy to implement and debug, and managing the system as a set of components is easier. Many practical systems are constructed from components, and adaptive systems can be easily made out of such reconfigurable CBS. Such systems can adapt their functional and non-functional behaviors in four ways:
• Some components being used in system composition can adjust their behaviors by adjusting their parameters. Self-adjusting components have been used for a long time. In multimedia systems, a server can adjust its playback rate according to the receiver’s consuming rate in order to achieve better playback quality (Cowan et al., 1995).
• People want to build systems that can solve problems under as many conditions as possible. It is hard to come up with a single algorithm that is optimal in every possible situation.
• A component can be added to the system to introduce new functionality. Similarly, a component can be deleted to save system resources. In many systems, what types of components are required is best decided at runtime. For example, in a distributed conference application, if all the participants are behind the same firewall, they may trust each other. Otherwise, an encryption mechanism is needed for security; messages can be processed differently depending upon whether they are going over an insecure link or not (Van et al., 1998).
• Components interact with each other. A distributed system or application works best if the distribution of its components is mapped well onto its underlying network or system.
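The runtime add/remove adaptation described above can be sketched with a toy message pipeline: an encryption component is inserted only when the link is insecure and removed again when all peers sit behind the same firewall. The `Pipeline` and `Encryptor` classes are illustrative, and the string reversal is only a stand-in for a real cipher.

```python
class Encryptor:
    def process(self, message):
        # toy "encryption" stand-in for a real cipher
        return message[::-1]

class Pipeline:
    """Holds the components a message passes through; reconfigurable at runtime."""
    def __init__(self):
        self.components = []

    def add(self, component):
        self.components.append(component)

    def remove(self, component):
        self.components.remove(component)

    def send(self, message):
        for comp in self.components:
            message = comp.process(message)
        return message

pipe = Pipeline()
print(pipe.send("hello"))   # secure link, no encryption: → hello

enc = Encryptor()
pipe.add(enc)               # insecure link detected at runtime
print(pipe.send("hello"))   # → olleh

pipe.remove(enc)            # back behind the shared firewall
print(pipe.send("hello"))   # → hello
```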

COMPONENT-BASED MODELS AND TECHNOLOGIES

In CBSE there are three major component models: COM, JavaBeans, and CORBA, and each of them provides a different level of service for application developers. These component-software models are the ones proposed by the Object Management Group (OMG), Microsoft, and Sun. They are:
Common Object Request Broker Architecture (CORBA) Component Model (CCM) (OMG, 1999): a later release of the CORBA standards proposes the new CORBA Component Model (CCM). CCM is an extension of Enterprise JavaBeans (EJB). CCM is a container-based specification, that is, every component instance lives inside a container.
Component Object Model (COM+) (Eddon, 1999): COM+ is an extension of COM, the foundation of Microsoft’s component platform. COM+ 2.0 has been used for the .NET Framework. COM+ separates declarative attributes about infrastructure services from the code of components.
Enterprise JavaBeans (EJB) (Sun Microsystems, 2001): EJB is Java’s container-based component model. The container implements the runtime environment for the enterprise bean, which includes security, concurrency, life cycle management, transactions and other services.
This study discusses the following component-based models and technologies, namely COM, EJB and CORBA, following Crnkovic et al. (2001).

Parrytomar (talk)22:53, 28 April 2017

Internet of Things Background and Architecture

Pradeep Tomar and Gurjit Kaur

IoT is currently the most prevalent information and communication technology for smart cities. IoT is a concept that envisions all objects, such as smartphones, tablets, digital cameras, drones, sensors and so on, being connected. When all of these objects are connected to each other, they enable increasingly intelligent processes and services that support our basic needs, economies, environment and health. Such a huge number of objects connected to the web provides many kinds of services and generates an enormous amount of data and information. IoT furnishes us with a wealth of sensor data. Real-time sensor data analysis and decision-making are often done manually; to make them scalable, they should ideally be automated. Artificial intelligence provides the framework and tools to go beyond trivial real-time decision and automation use cases of IoT. According to Sweeney (2005), the idea of the IoT originates from the Massachusetts Institute of Technology, which is dedicated to creating the IoT by using radio frequency and sensor networks. According to Uckelmann et al. (2011), IoT is a foundation for connecting things, sensors, and other smart technologies. Information and communication technologies can obtain data from anywhere by developing entirely new networks, which forms the IoT. Radio-Frequency Identification (RFID) and related identification technologies will be the backbone of the upcoming IoT. IoT advancement is possible thanks to 4G Long Term Evolution (LTE), Wi-Fi, ZigBee and Bluetooth Low Energy (BLE) technologies, which are being utilized in recent applications to make a city smart.

Architecture of IoT Layers

The architecture of IoT layers is dynamic in nature. The architecture of IoT has four layers: the sensor connectivity and network layer, the gateway and network layer, the management service layer, and finally the application layer. This architecture provides a communication stack for communication through IoT.

Sensor Connectivity and Network Layer

This layer has a group of smart devices and sensors, grouped according to their purpose and data types, such as environmental, magnetic, obstacle and surveillance sensors. A WSN formation is made and the information is delivered to a targeted location for further processing. Real-time information is collected and processed to get a result or generate some output. The sensor networks are then made to communicate with each other via sensor gateways. They can be connected using a Local Area Network (LAN) (Ethernet or Wi-Fi) or a Personal Area Network (PAN) (6LoWPAN, ZigBee, Bluetooth).

Gateway and Network Layer

The function of this layer is to support the huge volume of data produced by the sensor connectivity layer through the gateway network. It requires robust and reliable performance with regard to private and public network models. Network models are designed to support the communication and Quality of Service (QoS) requirements for latency, scalability, bandwidth and security while achieving high levels of energy efficiency.

IoT sensors are aggregated using different sorts of protocols and heterogeneous networks built on distinct technologies. IoT networks should be scalable in order to efficiently serve a wide range of services and applications over large-scale networks.

Management Service Layer

Information analytics, security control, process modeling and device control are done by the management service layer. It is also responsible for the operational support system, security, business rule management and business process management. It has to provide a service analytics platform offering statistical analytics, data mining, text mining, predictive analytics, etc. Data management manages the information flow, and it is of two types: periodic and aperiodic. In periodic data management, IoT sensor data requires filtering because the data is collected periodically and some of it may not be needed, so this data has to be filtered out. In aperiodic data management, the data is event-triggered IoT sensor data, which may require immediate delivery and response, e.g. medical emergency sensor data.
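The periodic/aperiodic distinction can be sketched as a small filtering routine: periodic readings are dropped when they have not changed enough to matter, while aperiodic, event-triggered readings are always delivered immediately. The field names and the change threshold below are illustrative assumptions.

```python
def manage(readings, threshold=0.5):
    """Deliver aperiodic readings immediately; filter near-duplicate periodic ones."""
    delivered = []
    last_periodic = None
    for r in readings:
        if r["kind"] == "aperiodic":
            delivered.append(r)            # e.g. medical emergency: no filtering
        else:
            if last_periodic is None or abs(r["value"] - last_periodic) >= threshold:
                delivered.append(r)        # value changed enough to matter
                last_periodic = r["value"]
            # otherwise the periodic sample is filtered out
    return delivered

readings = [
    {"kind": "periodic", "value": 20.0},
    {"kind": "periodic", "value": 20.1},   # filtered: change below threshold
    {"kind": "aperiodic", "value": 99.0},  # delivered immediately
    {"kind": "periodic", "value": 21.0},   # delivered: change >= threshold
]
print(len(manage(readings)))  # → 3
```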

Application Layer

This layer at the top of the stack is in charge of delivering various applications to different users of IoT. The application layer serves users in manufacturing, logistics, retail, environment, public safety, healthcare, food, medicine and so on. Different applications from these industry sectors can utilize IoT for service improvement.
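The four-layer stack described above can be sketched end to end: a sensor reading travels from the sensor layer through the gateway and management service layers up to the application layer. All class names, fields and the analytics rule are illustrative, not part of any IoT standard.

```python
class SensorLayer:
    def read(self):
        return {"sensor": "temperature", "value": 41.7}

class GatewayLayer:
    def forward(self, reading):
        # wrap the reading for reliable transport over the network
        return {"payload": reading, "qos": "reliable"}

class ManagementServiceLayer:
    def analyse(self, packet):
        reading = packet["payload"]
        reading["alert"] = reading["value"] > 40.0  # simple analytics rule
        return reading

class ApplicationLayer:
    def present(self, reading):
        status = "ALERT" if reading["alert"] else "ok"
        return f"{reading['sensor']}={reading['value']} [{status}]"

# Pass one reading up through all four layers.
stack = [SensorLayer(), GatewayLayer(), ManagementServiceLayer(), ApplicationLayer()]
reading = stack[0].read()
packet = stack[1].forward(reading)
analysed = stack[2].analyse(packet)
print(stack[3].present(analysed))  # → temperature=41.7 [ALERT]
```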

Parrytomar (talk)07:10, 24 February 2017