JCT - Volume 4 Issue 11 (November 2015)


Sr. No.  Title
1 Experimental Study on Test Automation Using Selenium WebDriver
Aishwarya Vatsa

Abstract- Client requirements compel software analysts to write test suites comprising test cases. One of the most important tasks for any SQA (Software Quality Assurance) engineer is to automate the testing procedure, and test automation has taken testing a step forward. For this purpose, numerous software testing tools are available that reduce human involvement in the testing procedure. In this paper, an experiment is performed with the assistance of Selenium WebDriver: the client's requirements are formulated into test scripts, and these scripts test various elements present on the user interface of a website. The paper explores most of the tools and packages required for writing test scripts.
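
A minimal WebDriver-style test script might look like the sketch below. The URL, element IDs, and page behaviour are hypothetical, and FakeDriver stands in for selenium.webdriver.Chrome() so the sketch runs without a browser:

```python
# Sketch of formulating a client requirement ("valid users reach the
# dashboard") as an executable test script, mirroring the WebDriver API.
# The URL, element IDs, and FakeDriver are illustrative stand-ins.

class FakeElement:
    def __init__(self, page, name):
        self.page, self.name = page, name

    def send_keys(self, text):
        self.page.fields[self.name] = text

    def click(self):
        # Pretend the login form accepts any non-empty credentials.
        if self.page.fields.get("username") and self.page.fields.get("password"):
            self.page.title = "Dashboard"

class FakeDriver:
    """Stand-in for selenium.webdriver.Chrome(), so no browser is needed."""
    def __init__(self):
        self.fields, self.title = {}, "Login"

    def get(self, url):
        self.current_url = url

    def find_element(self, by, value):
        return FakeElement(self, value)

    def quit(self):
        pass

def run_login_test(driver):
    """A requirement expressed as a test script against the driver API."""
    driver.get("https://example.com/login")           # hypothetical URL
    driver.find_element("id", "username").send_keys("alice")
    driver.find_element("id", "password").send_keys("secret")
    driver.find_element("id", "submit").click()
    assert driver.title == "Dashboard", "login requirement not met"
    driver.quit()
    return driver.title

print(run_login_test(FakeDriver()))
```

With Selenium actually installed, the same run_login_test function could be handed a real driver (webdriver.Chrome()) and By.ID locators instead of the fake.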

2 Effect of Process Parameters on Weld bead geometry of Narrow V-groove Butt joint in Pulsed Gas Metal Arc Welding
A. Pavan Kumar, Sanjay Kumar, Z. Jitendra

Abstract- Weld bead geometry is influenced by a number of welding process parameters that affect the quality of the joint. In this experimental study, an effort is made to find the effect of process parameters on the bead geometry of a narrow V-groove butt joint in pulsed gas metal arc welding. Three input process parameters, namely wire feed rate, welding speed and groove angle, each at three levels, are considered. The experiments are conducted on narrow V-groove butt joints of 5083-H111 aluminium alloy with groove angles of 20˚, 30˚ and 40˚ using a full factorial design of experiments. Mathematical models for side penetration and dilution are developed using linear regression analysis, and mean analysis for side penetration and dilution is done for all three input levels. It is observed that wire feed rate has the maximum effect on side penetration and dilution, whereas welding speed has an intermediate effect on both.

3 Implementation of Botnet Threat Detection in P2P
Mohini N.Umale, A.B. Deshmukh, M.D.Tambakhe

Abstract- Botnets are malicious code, like viruses, used to attack computers; they act as threats and are very harmful. Owing to the distributed nature of botnets, it is hard to detect them in peer-to-peer networks: the decentralized nature of Peer-to-Peer (P2P) botnets makes them difficult to detect, and their distributed nature also makes them resilient against take-down attempts. Moreover, smarter bots are stealthy in their communication patterns and evade the standard discovery techniques that look for abnormal network or communication behavior. We therefore need smarter techniques to detect such threats. Automated detection of botnet traffic is of high importance for service providers and for large-scale network monitoring. In this work we implement a new algorithm to optimize detection performance.

4 An Interesting Diophantine Problem on Triples-II
M.A.Gopalan, S.Vidhyalakshmi, E.Premalatha, R.Sridevi

Abstract- We search for three non-zero distinct integers a, b, c such that, if a non-zero integer is added to the sum of any pair of them as well as to their sum, the results are all squares.
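
The stated problem can be explored with a brute-force search; the bound below is arbitrary and the code is an illustration, not the authors' method:

```python
# Brute-force sketch of the problem above: find non-zero distinct integers
# a, b, c and a non-zero integer n such that a+b+n, b+c+n, c+a+n and
# a+b+c+n are all perfect squares. The search bound is arbitrary.
from math import isqrt

def is_square(m):
    return m >= 0 and isqrt(m) ** 2 == m

def find_triples(bound=8):
    hits = []
    rng = [v for v in range(-bound, bound + 1) if v != 0]
    for a in rng:
        for b in rng:
            if b == a:
                continue
            for c in rng:
                if c in (a, b):
                    continue
                for n in rng:
                    if all(is_square(s + n) for s in (a + b, b + c, c + a, a + b + c)):
                        hits.append((a, b, c, n))
    return hits

examples = find_triples(8)
print(examples[:3])
```

For instance (a, b, c, n) = (5, -7, 8, 3) works: 5-7+3 = 1, -7+8+3 = 4, 8+5+3 = 16, and 5-7+8+3 = 9 are all squares.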

5 Effect of Different UML Diagrams to Evaluate the Size Metric for Different Software Projects
Preety Verma Dhaka, Dr.Amita Sharma

Abstract- Measuring software is an important activity in Software Engineering, and appropriate metrics are needed to do it. Using software metrics we can assess the various qualitative and quantitative aspects of software. Software metrics are units of measurement that quantify software in terms of quality, size, effort, efficiency, reliability, performance, etc. Measures of specific attributes of the process, project and product are used to compute software metrics.

6 Moving object detection and methods: A Review
N.Palanivel, M.VasanthaPriya, R.Heera

Abstract- A method for identifying dim moving targets in highly cluttered, time-varying image sequences is presented. These targets are sufficiently faint that they cannot be detected in individual image frames; therefore a track-before-detect technique is needed. Moving targets are detected using a wavelet-based feature extraction rule, in which the bright regions in the composite image are transformed into bar features. Background rejection is used to remove sensor noise and other changes in brightness not associated with movement, for instance changes in illumination. Initial clutter rejection relies on a given threshold-exceedance rate estimated from the multiresolution background statistics of feature intensity. Further false-track rejection involves a coordinate-system analysis of image-frame variations in the neighborhood of each bright region, where real movement appears as connected diagonal swaths. This indicates that such approaches to moving-target detection are highly effective when backgrounds are strongly correlated in time.

7 Designing Home Parameters Monitoring and Controlling System Based on IOT
B.Anurag Reddy, B.Dhananjaya, S.M.Ganesh

Abstract- With the advancements in Wireless Sensor Networks (WSN) and Internet technologies, a new trend in the era of ubiquity is being realized. The enormous increase in Internet users and advances in internetworking technologies enable the networking of everyday objects. The "Internet of Things (IoT)" is about physical items talking to each other; machine-to-machine and person-to-computer communications will be extended to "things". The key technologies that drive the future IoT are smart sensor technologies, including nanotechnology, WSN and miniaturization. We describe an integrated network architecture and the interconnecting mechanisms for reliable measurement of parameters by smart sensors and transmission of data via the Internet. The proposed system can be used to update home parameters over the Internet via different communication protocols. It is designed as three nodes: a sensor node; a coordinator node, interfaced with the Internet through a PC; and a supervision node from which parameters can be monitored and controlled. The sensor and coordinator nodes communicate over the Zigbee wireless protocol, and the coordinator node updates the data onto the Internet via the PC, so the status of parameters can be viewed and controlled over the Internet.
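
The three-node flow described above can be sketched as follows; field names, sample values, and the in-memory "cloud" dict are hypothetical stand-ins for the Zigbee link and the internet upload:

```python
# Sketch of the three-node flow: a sensor node samples home parameters,
# the coordinator node validates and forwards them, and the supervision
# node reads the latest status. Transport layers are abstracted away.
import json
import time

def sensor_node_read():
    """Sensor node: sample parameters and frame them for the coordinator."""
    return json.dumps({"node": "sensor-1", "ts": time.time(),
                       "temperature_c": 24.5, "humidity_pct": 60})

def coordinator_node(frame, cloud):
    """Coordinator node: parse the frame (as if received over Zigbee) and
    update the 'internet' store (here just a dict standing in for a PC
    pushing to a web service)."""
    reading = json.loads(frame)
    cloud[reading["node"]] = reading
    return reading

def supervision_node(cloud, node_id):
    """Supervision node: monitor the latest parameters for a sensor node."""
    return cloud.get(node_id)

cloud = {}
coordinator_node(sensor_node_read(), cloud)
status = supervision_node(cloud, "sensor-1")
print(status["temperature_c"], status["humidity_pct"])
```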

8 HBGP: A Hybrid Border Gateway Protocol for Secure Resource Allocation in Overlay Networks
Maleswari. Nakka, Vakacharla Durgaprasadarao

Abstract- Routing is the process of sending dedicated packets from a valid source to a valid destination through a device called a router. Routes change periodically if a node failure or link failure occurs during data transmission. Nowadays overlay routing has attracted the attention of many network users for sending data through the network, mainly because there is no need to change the standards of the current routing scheme if there is delay or loss during data transfer. Although this is a very attractive scheme, deploying such an overlay network requires a large overlay infrastructure, and building it incurs high maintenance costs for usage as well as deployment. This gives rise to the following optimization problem: find a minimal set of overlay nodes that satisfies the routing requirements. For this scenario we first examine the practical aspects over real networks. We concentrate on the NORRA algorithm, which requires very few relay nodes (not more than 100) to enable shortest-path routing from a single source system to multiple autonomous systems; with it, the maximum path length can be reduced by almost 50% over the overall routing. As an extension, we have implemented the same concept on a network simulator, where we progressively show the performance and cost-effectiveness of overlay relay networks. We also use a primitive encryption algorithm to encrypt the data request and send the data from the valid source to the destination: once the data is received at the destination system, it is identified by its node IP address and only then decrypted at the receiver's node; otherwise the data remains encrypted.
Our simulation results clearly show that the proposed overlay system reduces delay, packet loss and maintenance cost.

9 A Novel Information Sharing System Designed by Humans Using Mobile Networks
T Madhusudana Rao, H.Appala Naidu

Abstract- Nowadays mobile devices play a vital role in every human life; as the population increases day by day, the usage of mobile devices has also increased to a great extent. As mobile users access the Internet to carry out their activities, many problems occur during their communication, since current mobile devices mainly depend on the infrastructure they have opted for or chosen. This type of architecture is inefficient in many situations and also requires a lot of inter-device communication. In this paper we implement a novel human-created network, called NHUNET, that enables information sharing between mobile devices through direct inter-device communication, and we design a BSUB network for interest-driven information sharing between HUNET nodes. As an extension, the architecture is implemented with two categories of nodes, producers and consumers: producers are service generators, and consumers are those who wait for services. Whenever a consumer changes his working state, say from active to inactive, he is immediately identified by NHUNET, which intimates every corresponding user about the problem node and alternately chooses two random nodes as mediators, called Agent Nodes. We have conducted various experiments on NHUNET and conclude that it is efficient and useful for data communication for almost all mobile users.

10 Analysis of a Unique Multivariate Correlation Algorithm (UMCA) on Wireless Sensor Networks in order to identify the DOS Attacks
Tatineni Bhagyasri, CH.Sunil

Abstract- Nowadays security plays a very important role in every domain, such as medicine, schools, shopping, banking and insurance. Because security is so important, hackers also try to steal sensitive information through various forms of attack. As data is transferred mainly through interconnected systems such as web servers, local database servers or overlay servers (stored on remote hardware rather than local servers), there are many security threats from network attackers. One of the most common attacks is the Denial of Service (DoS) attack, a non-physical attack. In this paper we implement a Unique Multivariate Correlation Algorithm (UMCA), which analyzes network traffic accurately by extracting geometrical values between intermediate nodes. As an extension we also implement modes of delay: if data is affected by a DoS attacker, the server can identify which type of request or response it has received. If the server receives the data within the time period specified by the client during transmission, it is treated as normal and identified as Normal Mode; if the same request or response takes longer than the stipulated time, it is treated as Danger Mode. This tells both the client and the server whether the network is prone to DoS attack or free from attackers. By conducting various experiments on a group of systems connected over a LAN, we conclude that the proposed UMCA algorithm reduces DoS attacks better than various classical approaches.

11 Monitoring of a Secure Cloud Storage by a Hybrid Cloud Service in Order to Eliminate Information De-duplication
Pilla Satish Kumar, Kuna Venkata Kiran, Medara Rambabu

Abstract- Nowadays cloud computing is one of the most fascinating domains, used by almost all MNCs and IT companies. It is the practice of using a large number of interconnected remote servers hosted on the Internet to store, access and retrieve data from remote machines rather than local machines. Because data stored in the cloud resides on remote servers rather than on local hardware, it must be stored securely. As a cloud server can be accessed simultaneously by many people with access privileges, the same file is sometimes stored multiple times, leading to data redundancy. Since cloud service costs more than a normal server, maintaining cloud data is expensive, and if duplicate data is stored multiple times the cloud user must pay extra for the same duplicated data, which is a major problem for cloud vendors. In this paper we implement data de-duplication, which prevents duplicate files from being stored on the cloud server. As an extension we also encrypt the data while it is stored in the cloud database. Additionally we provide a new enhancement for current cloud servers, token generation, which acts as a major security measure for cloud users: a user who registers for the first time receives a token at his registered mail id, and that token must be supplied at every login in order to access the data he has stored in the database. This type of security is not available from current cloud service providers, whether public or private.
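
Content-hash de-duplication of the kind described can be sketched as follows; the in-memory dict stands in for the cloud object store, and this is an illustration rather than the paper's system:

```python
# Sketch of content-hash de-duplication: a file whose SHA-256 digest is
# already present is not stored again, so duplicate uploads consume no
# extra space; only a new owner reference is recorded.
import hashlib

class DedupStore:
    def __init__(self):
        self.blobs = {}    # digest -> content (stored once)
        self.owners = {}   # digest -> set of users referencing it

    def put(self, user, content: bytes):
        digest = hashlib.sha256(content).hexdigest()
        is_duplicate = digest in self.blobs
        if not is_duplicate:
            self.blobs[digest] = content
        self.owners.setdefault(digest, set()).add(user)
        return digest, is_duplicate

store = DedupStore()
d1, dup1 = store.put("alice", b"quarterly report")
d2, dup2 = store.put("bob",   b"quarterly report")   # same content
print(dup1, dup2, len(store.blobs))
```

In a real system the content would be encrypted before storage, which is why deduplicating encrypted data (e.g. via convergent encryption) is the hard part the paper addresses.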

12 Impact of Distributed Computing Based Information System for the Go Green Drive and Use of Waste Water to Reduce Pollution on the National Highways: A Case Study of Satara District
P. P. Patil

Abstract- Information sharing is an important aspect today in the era of information and knowledge discovery, and pollution is one of the unavoidable components of the environment. The paper studies present problems on the roadsides of the National Highways in the area selected for the case study, Satara District in Maharashtra. The paper also discusses the emerging concept of societal development through Public-Private Partnership modeling. We surveyed and recorded certain observations regarding the increase of air pollution, and we suggest a Distributed Computing Model as an information-assisting model that can be one of the remedial solutions. We list the importance of public, private and partnership-based modeling aspects, along with the advantages and limitations of the suggested model. We also suggest contributions from individuals, especially the Municipal Corporation or local authority, and voluntary participation from social clubs in the society.

13 Analysis of Cloud Data Storage and Retrieval in a Decentralized Manner
Telagathoti Anusha, Kamathamu Vasanth Kumar

Abstract- In recent days there has been great demand for cloud servers, as cloud computing is the practice of using a network of remote servers hosted on the Internet to store, access and retrieve data from remote machines rather than local machines. Data stored on the cloud server always resides on remote servers and is retrieved when the user needs it. Since data on the cloud server is stored as plain text, it has no security when being retrieved, whether by the owner of the file or by anyone else inside or outside the group. In this paper we implement a new decentralized access control mechanism to store data securely with anonymous authentication. In the proposed scheme, whenever a cloud user wants to store or retrieve data to or from the cloud server, the user is authenticated to verify his identity; this is done by a trusted Third Party Authenticator in the cloud server. Only if the identity matches the user's original identity can the data be stored or retrieved; a user who fails to verify his identity is treated as unauthorized. Only a user verified with the token id can access the plain text after downloading; otherwise the data remains in cipher form. By conducting various experiments on the proposed scheme, we conclude that it secures cloud data during storage and retrieval in a decentralized manner better than several primitive methods.

14 A Novel Approach for Evaluation of Pattern Classifiers like Biometric Authentication, Network Intrusion Detection and Spam Filtering under an Attack Model
Maddi Sarika, Kuna Venkata Kiran, Medara Rambabu

Abstract- Security plays a prominent role in every aspect of human life, so we need security primitives to safeguard data. Many security primitives have been proposed in the literature to protect data, and each has its own advantages in securing sensitive data. Many adversaries try to access the information of secured users illegally through various attack code. In this paper we analyze and implement a new security model that combines three security primitives, biometric authentication, spam filtering and intrusion detection, to give stronger security for data stored on remote servers or local machines. We also design an additional component that monitors user activity: login and logout details, availability of the network IP address, and upload and download details. By conducting various experiments on the proposed integrated model, our simulation results clearly show that with this new framework a user can be free from adversary attack during data storage as well as data retrieval.

15 A Novel Identity Based Secure Data Audit in Public Cloud
Surla Atcharao, Medara Rambabu

Abstract- Cloud computing is the practice of using a network of remote servers hosted on the Internet to store, access and retrieve data from remote machines rather than local machines. As the cloud is used mainly for storing data on remote servers, it offers various services such as PaaS, IaaS and SaaS, each with its own advantages and limitations due to its hardware and software usage. In the cloud, users who place data are known as data owners and users who access those files are known as data users. Since data is placed on a remote system rather than on local machines, data integrity is very important for both data owners and data users, and both need to audit the cloud data without retrieving it entirely. Data auditing should apply equally to public and private cloud users. The many existing audit mechanisms for cloud servers have not achieved total data integrity, as they fail in some instances. In this paper we implement a novel privacy-preserving mechanism that supports public auditing of shared data stored in the cloud. In particular, we exploit a ring signature method to compute the verification metadata needed to audit the correctness of shared data. With this mechanism the identity of the signer of each block of shared data is kept private from public verifiers, who can efficiently verify shared-data integrity without retrieving the entire file. As an extension we also implement a proof of concept in which data auditing is done by a trusted third party auditor (TPA), who audits the data of every user based on the group. The TPA conducts a secure audit for each individual user's data based on the group key assigned by the CSP; if a user supplies a wrong id during the audit, he is treated as unauthorized and file verification fails.
By conducting various experiments on this new approach, we conclude that the proposed cloud approach is more effective when auditing shared-data integrity.
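
The spot-check flavour of public auditing, verifying blocks against stored digests without retrieving the whole file, can be sketched as below. The paper's ring signatures, group keys and TPA protocol are out of scope here, so this is only a simplified illustration of the block-level idea:

```python
# Simplified sketch of per-block integrity auditing: the verifier keeps
# only the blocks' digests, challenges the server for a few random block
# indices, and checks the returned blocks against the stored digests,
# never downloading the whole file.
import hashlib
import random

BLOCK = 8  # tiny block size, for illustration only

def split_blocks(data: bytes):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

def digests(blocks):
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def audit(server_blocks, stored_digests, challenges):
    """Verifier: spot-check the challenged blocks against stored digests."""
    return all(hashlib.sha256(server_blocks[i]).hexdigest() == stored_digests[i]
               for i in challenges)

data = b"shared file stored in the cloud, audited block by block"
blocks = split_blocks(data)
stored = digests(blocks)          # retained by the verifier at upload time

challenges = random.sample(range(len(blocks)), 3)
print(audit(blocks, stored, challenges))      # intact data passes

tampered = list(blocks)
tampered[0] = b"corrupt!"
print(audit(tampered, stored, [0]))           # challenged corrupt block fails
```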

16 A Novel Mechanism to Filter out Boundary Cut Detection in Mixed Networks
Karri.Govinda Rao, Medara Rambabu

Abstract- A network is an interconnection of systems that transfer data from one system to another; networks are classified into various types based on usage and configuration. A client-server network is one such type, in which a client always sends requests and the server always generates responses. During communication between nodes, data is divided into packets of equal size. However, cuts may occur between nodes, such as edge cuts or node cuts, mainly due to the inactivity of some intermediate nodes: if any node becomes inactive during communication, it leads to a network cut between consecutive nodes, which can in turn lead to an edge cut. Many decision-tree-based packet classification algorithms, such as HiCuts, HyperCuts and EffiCuts, are available and show good network performance. While decision-tree-based algorithms are efficient at identifying cuts, they sometimes involve complicated heuristics for determining the fields and the number of cuts. In this paper we propose a novel packet classification algorithm using boundary cutting, which is able to find the cuts that occur during data communication between nodes. Our simulation results clearly show that the proposed boundary-cutting algorithm classifies packets with 10–23 on-chip memory accesses and 1–4 off-chip memory accesses on average.
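
The boundary-cutting idea can be illustrated in one dimension: cut the field exactly at the boundaries of the rules' ranges (rather than at fixed equal-width points, as in HiCuts-style schemes) and precompute per-segment rule sets, so a lookup is a single binary search. The port ranges below are hypothetical, and real classifiers cut on multiple header fields:

```python
# 1-D sketch of boundary cutting for packet classification: partition a
# hypothetical destination-port field at rule-range boundaries, precompute
# which rules cover each segment, then classify with one binary search.
from bisect import bisect_right

rules = {                      # rule name -> (low, high) port range, inclusive
    "R1": (0, 1023),           # well-known ports
    "R2": (80, 80),            # HTTP
    "R3": (1024, 49151),       # registered ports
}

# Cut points: every range start, and every range end + 1.
cuts = sorted({lo for lo, _ in rules.values()} |
              {hi + 1 for _, hi in rules.values()})

# Precompute the matching rule set for each segment [cuts[i], cuts[i+1]).
segments = [frozenset(name for name, (lo, hi) in rules.items() if lo <= c <= hi)
            for c in cuts]

def classify(port):
    """Return the set of rules matching this port value."""
    i = bisect_right(cuts, port) - 1
    return segments[i] if i >= 0 else frozenset()

print(sorted(classify(80)))    # port 80 is covered by both R1 and R2
print(sorted(classify(8080)))  # covered by R3 only
```

Cutting only at rule boundaries keeps the number of segments proportional to the number of rules, which is where the scheme's memory-access savings come from.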