Finding the Area of a Polygon




Abstract: This work discusses finding the area of a given polygon with n edges or sides. The time complexity and scalability of the algorithm are analyzed. The results indicate that the algorithm is scalable and efficient.

Keywords: polygon, triangles, divide and conquer.

I.      Introduction

The area of a polygon with n edges can be found by several methods, for example by finding the centroid of the polygon and then applying an area formula. This paper uses the divide and conquer technique: when a problem is large or complex, divide it into smaller subproblems, solve those subproblems, and then combine their results to obtain the solution to the original problem. For a triangle (the simplest polygon) the area formula is known:

Area = ½ * base * height.     (1)

Polygon: A plane shape (two-dimensional) with straight sides.
Examples: triangles, rectangles, and pentagons.

Triangles of a Polygon: The triangles of a polygon are the triangles created by drawing line segments from one vertex of a polygon to all the others.

Following the divide and conquer method, to find the area of a polygon with n edges we first form the polygon, then divide it into smaller triangles, compute the area of each triangle, and finally combine those areas to obtain the area of the polygon. The table below lists each regular polygon by name, its number of edges, and the number of triangles it yields.

Number of edges (n)    Polygon name        Total triangles (t)
4                      Tetragon            2
5                      Pentagon            3
6                      Hexagon             4
7                      Heptagon            5
8                      Octagon             6
9                      Nonagon/Enneagon    7
10                     Decagon             8
11                     Undecagon           9
12                     Dodecagon           10

Figure 1: Types of polygons with the number of triangles.

For a given polygon of n edges, draw as many diagonals as possible, with the condition that the diagonals must not overlap or intersect each other. The figures below show how to divide a polygon with n edges and how many triangles can be formed.

To find the area of a tetragon, divide the tetragon as shown in Figure 2.

Figure 2: Tetragon division
To find the area of a pentagon, divide the pentagon as shown in Figure 3.

Figure 3: Pentagon division



Figure 4: Hexagon division

Figure 5: Heptagon division

Figure 6: Nonagon division
Divide all other polygons in the same way; a polygon with n edges yields n − 2 triangles. Compute the area of each triangle, then combine the triangle areas to obtain the area of the polygon.
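The division step above can be sketched in code. The following Python sketch is illustrative rather than from the paper: it fan-triangulates a convex polygon given its vertices in order (the paper instead works from edge counts and a shared base and height) and sums the areas of the n − 2 triangles.

```python
# Sketch (illustrative): fan-triangulate a convex polygon whose vertices
# are listed in order, then sum the triangle areas.

def triangle_area(p, q, r):
    """Area of triangle pqr via the cross-product formula."""
    return abs((q[0] - p[0]) * (r[1] - p[1]) -
               (r[0] - p[0]) * (q[1] - p[1])) / 2.0

def fan_polygon_area(vertices):
    """Fan from vertex 0: an n-gon yields n - 2 triangles."""
    n = len(vertices)
    return sum(triangle_area(vertices[0], vertices[i], vertices[i + 1])
               for i in range(1, n - 1))

# A unit square (tetragon) divides into 2 triangles of area 0.5 each.
print(fan_polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0
```

The fan from one vertex matches the paper's rule that the diagonals must not cross each other.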

II.   Algorithm development


The polygon-area algorithm is developed using the divide and conquer strategy. For a given number of edges, we form a polygon and then divide it into triangles using the formula

t = n − 2,                      (2)

where t is the total number of triangles and n is the number of edges of the polygon. Once the polygon is divided into triangles, it is easy to compute the area of each triangle using formula (1). Combining those areas gives the total area of the polygon:

TArea = t * Area.                   (3)
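Formulas (1)-(3) can be combined into a short sketch; note the paper's assumption that all t triangles share the same base and height:

```python
# Minimal sketch of formulas (1)-(3); assumes every one of the
# t = n - 2 triangles has the same base and height, as the paper does.

def polygon_area_from_edges(n, base, height):
    t = n - 2                        # formula (2)
    tri_area = 0.5 * base * height   # formula (1)
    return t * tri_area              # formula (3)

print(polygon_area_from_edges(4, 2.0, 1.0))  # tetragon: 2 triangles -> 2.0
```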

Algorithm development steps
Step 1:
Start

Step 2:
Enter the total number of vertices and edges of the polygon.

Step 3:
Divide the polygon into n − 2 triangles as shown in Figures 2-6.

Step 4:
Find the area of each individual triangle.

Step 5:
Combine the areas of the individual triangles to get the area of the given polygon.

Step 6:
End or terminate.

III. Algorithm and analysis of the algorithm


Algorithm: area of the polygon
Input: v, e, t, base, height
Output: polygon_area
{

if (v == 2 || e == 2)
        { cannot form a polygon, exit }
else
        { can form a polygon }

if (v == 3 || e == 3)
        { the polygon is a triangle }
else if (v == 4 || e == 4)
        { the polygon is a tetragon }
else if (v == 5 || e == 5)
        { the polygon is a pentagon }

/* according to the v and e values, the name/type of the polygon is displayed */

/* to find the number of triangles; this divides the polygon into several triangles */

for i ← 1 to e − 2
    tn(i) ← e − 2

/* to compute the area of each individual triangle */

for i ← 1 to tn(i)
    tarea(i) ← tn(i) * (1/2 * base * height)

/* to combine the areas of the individual triangles */

for i ← 1 to tarea(i)
    Tarea ← Tarea + tarea(i)

      { display polygon area, exit }

}
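A runnable Python rendering of the listing above (a sketch; the 1/5 factor in the original listing is taken to be a typo for the 1/2 of formula (1), and the name table follows Figure 1):

```python
# Runnable rendering of the pseudocode above: classify the polygon by its
# edge count, divide it into t = e - 2 triangles, combine the areas.

POLYGON_NAMES = {3: "triangle", 4: "tetragon", 5: "pentagon", 6: "hexagon",
                 7: "heptagon", 8: "octagon", 9: "nonagon", 10: "decagon"}

def polygon_area_named(e, base, height):
    if e <= 2:
        raise ValueError("cannot form a polygon")
    name = POLYGON_NAMES.get(e, f"{e}-gon")   # name/type of polygon
    t = e - 2                                 # divide into t triangles
    area = t * (0.5 * base * height)          # combine the triangle areas
    return name, area

print(polygon_area_named(5, 2.0, 1.0))  # ('pentagon', 3.0)
```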

The analysis of the algorithm is shown in the table below.

Input: n, t, base, height
Output: area

Algorithm step                                   Cost    Frequency
read number (n)                                  1       1
if (n == 2)                                      1       1
    then print polygon cannot be formed, exit
    else print polygon formed                    1       n
read base, height                                2       1
if (n == 3) print polygon is a triangle          1       1
if (n == 4) print polygon is a tetragon          1       1
if (n == 5) print polygon is a pentagon          1       1
if (n == 6) print polygon is a hexagon           1       1
if (n == 7) print polygon is a heptagon          1       1
/* divide and calculate polygon area */
t ← n − 2                                        2       n
area ← 0.5 * base * height * t                   5       n
print area

Figure 3.1: Analysis of the algorithm steps.

IV.      Experimentation and profiling

A priori analysis:
Analyzing the algorithm and calculating its computation time before execution is termed a priori analysis. After an analytical review of the algorithm, its calculated time complexity is O(n log n).

A posteriori analysis:
A posteriori analysis is done after the algorithm is executed on a computer. This analysis is also called profiling.

The table below shows the execution times for different polygons.

Polygon      Number of sides    Execution time
Triangle     3                  3.4352 ms
Tetragon     4                  5.397 ms
Pentagon     5                  5.785 ms
Hexagon      6                  6.257 ms
Heptagon     7                  6.853 ms
Octagon      8                  7.209 ms
Nonagon      9                  7.432 ms
Decagon      10                 7.923 ms

Figure 4.1: Execution times for n-sided polygons.

Computing the time complexity:

Tworst = O(n log n).

Tbest = Ω(k),            where k is some constant.

Figure 4.2 below shows the time-complexity graph for finding the area of a given polygon with n edges.
Figure 4.2: Computing time complexity for the polygon-area algorithm.


V.    Summary / conclusion

The time complexity for finding the area of a given polygon is Tbest ∝ k and Tworst ∝ n log n. The worst-case time complexity of the algorithm tends to n log n because the computational or execution steps vary for different types of polygon; it depends mainly on the number of edges and on how many triangles can be formed from the polygon.


Restructuring iterative algorithms under sparse conditions


Abstract: This paper discusses determining whether a given sparse matrix is diagonally dominant by solving a set of simultaneous equations using iterative algorithms such as the Gauss-Seidel and Jacobi methods. The time complexity and scalability of the algorithms are analyzed. The results indicate that the algorithms are scalable and efficient.

I.      Introduction

Iterative schemes require time to achieve sufficient accuracy and are reserved for large systems of equations in which most elements of the matrix are zero. Often the algorithms are tailor-made to take advantage of special structure such as band matrices. Practical uses include applications in circuit analysis, boundary value problems, and partial differential equations.
   
    Iteration is a popular technique for finding the roots of equations. A generalization of fixed-point iteration can be applied to systems of linear equations to produce accurate results. The Jacobi iteration method is attributed to Carl Jacobi (1804-1851), and Gauss-Seidel iteration is attributed to Johann Carl Friedrich Gauss (1777-1855) and Philipp Ludwig von Seidel (1821-1896).

In numerical linear algebra, the Gauss-Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a linear system of equations and is similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonals, convergence is only guaranteed if the matrix is either diagonally dominant or symmetric and positive definite.

Diagonally dominant matrix:

A square matrix A is diagonally dominant if, for every row, the absolute value of the diagonal entry is greater than or equal to the sum of the absolute values of the other entries in that row, i.e. |aii| ≥ Σ(j≠i) |aij| for all i.
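A direct check of row diagonal dominance (|aii| at least the sum of the other absolute values in row i, strictly greater for strict dominance) can be sketched as follows; `is_diagonally_dominant` is an illustrative helper, not from the paper:

```python
# Sketch of the row-dominance test: |a[i][i]| must be at least (or, for
# strict dominance, greater than) the sum of the other |a[i][j]| in row i.

def is_diagonally_dominant(A, strict=False):
    for i, row in enumerate(A):
        off_diag = sum(abs(v) for j, v in enumerate(row) if j != i)
        if strict and not abs(row[i]) > off_diag:
            return False
        if not strict and abs(row[i]) < off_diag:
            return False
    return True

A = [[4, -1, 1],
     [1,  5, 2],
     [0,  1, 3]]
print(is_diagonally_dominant(A, strict=True))  # True
```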

Symmetric matrix:
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose: A = A^T. The entries of a symmetric matrix are symmetric with respect to the main diagonal, so if the entries are written as A = (aij), then aij = aji for all indices i and j. The 3×3 matrix A shown above is symmetric.
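The definition aij = aji can be checked entry by entry; a minimal sketch:

```python
# A square matrix is symmetric iff a[i][j] == a[j][i] for all i, j.

def is_symmetric(A):
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

print(is_symmetric([[1, 2], [2, 3]]))  # True
print(is_symmetric([[1, 2], [0, 3]]))  # False
```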

Sparse matrix:

A sparse-matrix representation stores all non-zero elements of the original matrix along with their row and column numbers. In this representation, the first column holds the row number of a non-zero element, the second column holds its column number, and the third column holds the non-zero value itself. In the first row, the first column gives the number of rows in the original matrix, the second column gives the number of columns, and the third column gives the number of non-zero elements. The figure below shows an original matrix A and its sparse representation As.
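The triplet layout described above can be built as follows (a sketch using Python's 0-based indices; the paper's representation may count rows and columns from 1):

```python
# Sketch of the triplet layout: a header row (rows, cols, non-zero count)
# followed by one (row index, column index, value) row per non-zero entry.

def to_sparse(A):
    triples = [(i, j, v)
               for i, row in enumerate(A)
               for j, v in enumerate(row) if v != 0]
    header = (len(A), len(A[0]), len(triples))
    return [header] + triples

A = [[4, 0, 0],
     [0, 5, 2],
     [0, 0, 3]]
print(to_sparse(A))  # [(3, 3, 4), (0, 0, 4), (1, 1, 5), (1, 2, 2), (2, 2, 3)]
```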

II.   Algorithm development

Algorithm development for the iterative methods is explained below using the Jacobi method and the Gauss-Seidel method; for both methods, the input is a sparse matrix.

Jacobi method:
The Jacobi method is an algorithm for determining the solutions of a system of linear equations in which the largest absolute value in each row and column is held by the diagonal element. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges.

Given a square system of n linear equations

    Ax = b,

where A = (aij) is the coefficient matrix, x is the vector of unknowns, and b is the right-hand-side vector, A can be decomposed into a diagonal component D and the remainder R:

    A = D + R.

The solution is then obtained iteratively via

    x(k+1) = D^-1 (b − R x(k)).

The element-based formula is thus:

    xi(k+1) = (bi − Σ(j≠i) aij xj(k)) / aii.
Note that the computation of xi(k+1) requires each element in x(k) except itself. Unlike the Gauss-Seidel method, we can't overwrite xi(k) with xi(k+1), as that value will be needed by the rest of the computation. The minimum amount of storage is two vectors of size n.
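The element-based formula translates directly into code. A minimal dense-matrix sketch follows (the triplet sparse format is omitted for brevity, and a fixed iteration count stands in for a convergence test):

```python
# Jacobi iteration per the element-based formula above: the whole vector
# x(k) is kept while x(k+1) is built, then replaced at once.

def jacobi(A, b, iterations=50):
    n = len(b)
    x = [0.0] * n                       # initial guess x(0) = 0
    for _ in range(iterations):
        x_new = [0.0] * n
        for i in range(n):
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - sigma) / A[i][i]
        x = x_new                       # two vectors of size n, as noted
    return x

# Diagonally dominant system 4x + y = 6, x + 3y = 7 has solution (1, 2).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
print(jacobi(A, b))  # close to [1.0, 2.0]
```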
Gauss-Seidel method:
The Gauss-Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a linear system of equations and is similar to the Jacobi method. Though it can be applied to any matrix (here, a sparse matrix) with non-zero elements on the diagonals, convergence is only guaranteed if the matrix is either diagonally dominant or symmetric and positive definite.
Given a square system of n linear equations with unknown x,

    Ax = b,

A can be decomposed into a lower triangular component L* (the diagonal and everything below it) and a strictly upper triangular component U:

    A = L* + U.

The system of linear equations may then be rewritten as

    L* x = b − U x.

The Gauss-Seidel method is an iterative technique that solves the left-hand side of this expression for x, using the previous value of x on the right-hand side. Analytically, this may be written as

    x(k+1) = (L*)^-1 (b − U x(k)).

However, by taking advantage of the triangular form, the elements of x(k+1) can be computed sequentially using forward substitution:

    xi(k+1) = (bi − Σ(j<i) aij xj(k+1) − Σ(j>i) aij xj(k)) / aii.
Note that the sum inside this computation of xi(k+1) uses the already-updated elements x1(k+1), ..., xi−1(k+1) together with the not-yet-updated elements xi+1(k), ..., xn(k).
The procedure is generally continued until the changes made by iteration are below some tolerance.
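A minimal sketch of the method with the tolerance-based stopping rule described above (dense lists rather than the sparse format, for brevity):

```python
# Gauss-Seidel with forward substitution: each x[i] is overwritten in
# place, so updated values are used immediately within the same sweep.

def gauss_seidel(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n                        # initial guess x(0) = 0
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new_xi = (b[i] - sigma) / A[i][i]
            max_change = max(max_change, abs(new_xi - x[i]))
            x[i] = new_xi                # in-place update
        if max_change < tol:             # changes below some tolerance
            break
    return x

# Same diagonally dominant system as before: solution (1, 2).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
print(gauss_seidel(A, b))  # close to [1.0, 2.0]
```

The in-place update is exactly what distinguishes this sketch from the Jacobi one: only a single vector of size n is stored.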

III. Algorithm and analysis of the algorithm

Algorithm Jacobi method:

choose an initial guess x(0) to the solution
k = 0
check if convergence is reached
while convergence not reached do
    for i ← 1 to n
        σ = 0
        for j ← 1 to n
            if j ≠ i then
                σ = σ + aij * xj(k)
            end if
        end (j-loop)
        xi(k+1) = (bi − σ) / aii
    end (i-loop)
    check if convergence is reached
    k = k + 1
end (while convergence condition not reached)

Convergence
The standard convergence condition (for any iterative method) is that the spectral radius of the iteration matrix is less than 1:

    ρ(D^-1 R) < 1.

The method is guaranteed to converge if the sparse matrix A is strictly or irreducibly diagonally dominant. Strict row diagonal dominance means that, for each row, the absolute value of the diagonal term is greater than the sum of the absolute values of the other terms:

    |aii| > Σ(j≠i) |aij|.

The Jacobi method sometimes converges even if these conditions are not satisfied.
Algorithm Gauss-Seidel method:
Inputs: A, b
Output: φ

choose an initial guess φ(0) to the solution
repeat until convergence
    for i ← 1 to n
        σ ← 0
        for j ← 1 to n
            if j ≠ i then
                σ ← σ + aij * φj
            end if
        end (j-loop)
        φi ← (1/aii)(bi − σ)
    end (i-loop)
    check if convergence is reached
end (repeat)
                        

IV.      Experimentation and profiling

A priori analysis:
Analyzing the algorithm and calculating its computation time before execution is termed a priori analysis. After an analytical review of the algorithm, its calculated time complexity is O(n²).

Computing the time complexity:

Jacobi method:

Statement                      Cost   Best-case frequency   Worst-case frequency
for i ← 1 to n                 n-1    n-1                   n
σ ← 0                          1      1                     1
for j ← 1 to n                 n-1    n-1                   n
if i ≠ j then                  1      n-1                   n
σ ← σ + aij φj                 3      3                     3
φi ← (1/aii)(bi − σ)           4      n-1                   n

Figure 4.1: Time complexity for the Jacobi method.

Tbest ∝ (n-1)(n-1) + 1 + (n-1)(n-1) + (n-1) + 4(n-1)
Tbest = 2n² + n − 2

Tbest = Ω(n²).

Gauss-Seidel method:

Statement                                       Cost   Best-case frequency   Worst-case frequency
choose an initial guess x(0); k = 0             1      1                     1
check if convergence is reached                 1      1                     1
while convergence not reached do                n      n-1                   n
for i ← 1 to n                                  n      n-1                   n
σ = 0                                           1      1                     1
for j ← 1 to n                                  n      n-1                   n
if j ≠ i then                                   1      1                     1
σ = σ + aij xj(k)                               3      1                     1
xi(k+1) = (bi − σ) / aii                        3      1                     1
check if convergence is reached                 1      1                     1
k = k + 1                                       2      1                     1
end (while convergence condition not reached)   n      n-1                   n

Figure 4.2: Time complexity for the Gauss-Seidel method.

Tbest ∝ 4n² − 4n + 10

Tbest = Ω(n²).

A posteriori analysis:
A posteriori analysis is done after the algorithm is executed on a computer. This analysis is also called profiling.

Jacobi method:

Tworst ∝ (n-1)·n + 1 + (n-1)·n + n + 9 + 4n
Tworst ∝ 2n² + 3n + 10
Tworst = O(n²).

Gauss-Seidel method:

Tworst ∝ 4n² + 13
Tworst = O(n²).

The figures below show the time-complexity graphs for the iterative methods under different data sets.

Figure 5.1 Time Complexity for Priori analysis.

Figure 5.2 Time Complexity for Posteriori analysis.

V.    Summary / conclusion

After analyzing the proposed algorithms for the given problem, the best-case, average-case, and worst-case time complexities are the same for both the Jacobi and Gauss-Seidel methods. The time complexity of the iterative algorithms is O(n²).
